Qualitative data collected as audio-visual recordings are ideally transcribed into textual files for archiving and sharing. Consider developing transcription conventions, instructions, guidelines and a template to ensure quality and consistency across a data collection.
Transcription is a translation between forms of data; in qualitative research it most commonly means converting audio recordings to text. The approach taken should match the analytic and methodological aims of the research. Whilst transcription is often part of the analysis process, it also enhances the sharing and reuse potential of qualitative research data. Full transcription is recommended for data sharing.
Good transcripts have:
When planning the format / template for transcription, best practice is to:
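As an illustration, a minimal interview transcript template might look like the sketch below. The header fields, speaker tags and bracketed conventions shown are illustrative assumptions, not a prescribed standard; a project would define its own in its transcription guidelines.

```
Project:        [project name]
Interview ID:   [unique identifier]
Date:           [dd/mm/yyyy]
Interviewer:    INT
Interviewee:    RES [pseudonym]

INT:  [question as spoken]
RES:  [answer as spoken]

Conventions: [unclear 00:12:34] marks inaudible speech with a timestamp;
[laughs] marks non-verbal sounds; ... marks a trailing-off utterance.
```

Keeping such a template alongside written instructions helps multiple transcribers produce consistent files across a collection.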
Transcription methods depend very much upon your theoretical and methodological approach, and can vary between disciplines. A thematic sociological research project usually requires a denaturalised approach, i.e. one most like written language (Bucholtz, 2000), because the focus is on the content of what was said and the themes that emerge from it.
A project using conversation analysis would use a naturalised approach, i.e. one most like speech, whereby the transcriber seeks to capture all the sounds they hear and uses a range of symbols to represent particular features of speech in addition to the spoken words; for example, the length of pauses, laughter, overlapping speech, turn-taking or intonation.
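For example, a naturalised transcript using conventions of this kind might render a short exchange as follows. The symbols follow widely used Jeffersonian notation; the dialogue itself is invented for illustration.

```
A:  I was going to say (0.5) I never really thought about it
B:                                    [mm hm]
A:  and then it ju::st (.) .hh it just happened ((laughs))
```

Here (0.5) is a timed pause in seconds, (.) a micropause, [ marks the onset of overlapping speech, :: a stretched sound, .hh an audible in-breath, and (( )) a transcriber's description of non-speech activity.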
A psycho-social method transcript may include detailed notes on emotional reactions, physical orientation, body language, use of space, as well as the psycho-dynamics in the relationship between the interviewer and interviewee.
Some transcribers may try to make a transcript grammatically correct in wording and punctuation, considerably changing the sense of flow and dynamics of the spoken interaction. Transcription should capture the essence of the spoken word, but need not go as far as the naturalised approach.
Reference: Bucholtz, M. (2000) 'The Politics of Transcription', Journal of Pragmatics, 32: 1439-1465.