Content analysis
Content analysis is "a wide and heterogeneous set of manual or computer-assisted techniques for contextualized interpretations of documents produced by communication processes strictiore sensu (any kind of text, written, iconic, multimedia, etc.) or signification processes (traces and artifacts), having as ultimate goal the production of valid and trustworthy inferences."[1]
Though the locution "content analysis" has come to be a sort of umbrella term for an almost boundless set of quite diverse research approaches and techniques, it is still in use today in the social sciences, computer science, and the humanities to identify methods for studying and/or retrieving meaningful information from documents.[1] In a more focused sense, "content analysis" refers to a family of techniques oriented to the study of "mute evidence", that is, texts and artifacts.[2] Texts come from communication processes strictiore sensu (i.e. types of communication intentionally activated by a sender, using a code sufficiently shared with the receiver). Content analysis distinguishes five types of texts: 1. written texts (books, papers, etc.); 2. oral texts (speech, theatre plays, etc.); 3. iconic texts (drawings, paintings, icons, etc.); 4. audio-visual texts (TV programs, movies, videos, etc.); 5. hypertexts (any combination of the above, on the Internet).
On the other hand, content analysis can also study traces (documents from past times) and artifacts (non-linguistic documents), which come from communication processes latiore sensu, commonly referred to as "signification" in semiotics (in the absence of an intentional sender, semiosis develops by abduction).[1]
Despite this wide variety of options, generally speaking every content analysis method involves «a series of transformation procedures, equipped with a different degree of formalisation depending on the type of technique used, but which share the scientific re-elaboration of the object examined. This means, in short, guaranteeing the repeatability of the method, i.e. that pre-set itinerary which, following pre-established procedures (techniques), has led to those results. This path changes considerably depending on the direction imprinted by the interpretative key of the researcher who, at the end of the day, is responsible for the operational decisions made».[3]
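As a minimal, hypothetical illustration of such a formalised, repeatable procedure, the sketch below applies a fixed codebook (a dictionary mapping categories to indicator terms) to a text. The codebook, category names, and sample sentence are invented for the example; in a real study they would embody the researcher's documented operational decisions.

```python
from collections import Counter

# Hypothetical codebook: the categories and indicator terms are invented
# for illustration. In an actual study they encode the researcher's
# operational decisions and must be documented so the coding can be
# repeated by others.
CODEBOOK = {
    "economy": {"market", "trade", "inflation", "jobs"},
    "conflict": {"war", "attack", "dispute", "strike"},
}

def code_document(text, codebook=CODEBOOK):
    """Count how many indicator terms of each category occur in `text`.

    The same input and codebook always yield the same counts, which is
    the kind of repeatability the quoted passage refers to.
    """
    tokens = text.lower().split()
    counts = Counter()
    for category, terms in codebook.items():
        counts[category] = sum(1 for t in tokens if t in terms)
    return counts

print(code_document("The trade dispute hit the market and cost jobs"))
# Counter({'economy': 3, 'conflict': 1})
```

The formalisation does not remove the researcher's interpretative responsibility: choosing the categories and indicator terms is itself an interpretative act, which is why the quoted passage insists that the researcher remains accountable for the operational decisions made.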
Over the years, content analysis has been applied to a variety of fields. Hermeneutics and philology have long used content analysis to interpret sacred and profane texts and, in many cases, to attribute texts' authorship and authenticity.[1][4]
More recently, particularly with the advent of mass communication, content analysis has seen increasing use in analysing and understanding media content and media logic. The political scientist Harold Lasswell formulated the core questions of content analysis in its early-to-mid 20th-century mainstream version: "Who says what, to whom, why, to what extent and with what effect?".[5] The strong emphasis on a quantitative approach initiated by Lasswell was carried forward by another "father" of content analysis, Bernard Berelson, whose definition is emblematic in this respect: «a research technique for the objective, systematic and quantitative description of the manifest content of communication».[6] This definition was the product of a positivist epistemological context, quite close to a naïve realism that has long since become obsolete.[1]

Approaches of this type are nevertheless resurging, driven by the remarkable fertility of recent technologies and their application within mass and personal communication. With the spread of new media, particularly social media and mobile devices, content analysis now confronts huge amounts of textual big data. The risk is that the complexity of the process of semiosis is frequently underestimated and trivialized whenever statistics are uncritically applied to large amounts of analogic-native data. In such cases, the main problem stems from a naive use of measures and numbers as an always-valid certificate of "objectivity" and "systematicity", even when the starting point is the sharable principle of containing sloppy, evidence-detached analyses spoiled by the «human tendency to read textual material selectively, in support of expectations rather than against them».[1][4]
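A hedged sketch of the Berelson-style quantitative approach, and of its limits, is given below: it counts manifest word frequencies in a toy corpus. The two snippets and the stopword list are invented for the example; the counts are perfectly repeatable, yet say nothing by themselves about meaning, which is exactly the risk of uncritical application described above.

```python
import re
from collections import Counter

# Toy corpus: these two snippets are invented stand-ins for a large
# collection of, e.g., social media posts.
corpus = [
    "The new policy will create jobs, jobs, and more jobs.",
    "Critics say the policy threatens jobs rather than creating them.",
]

# A frequency count is the textbook "objective, systematic and
# quantitative description of manifest content". Note, however, that it
# treats the two opposite statements above as near-identical, which
# illustrates why counts alone do not settle questions of meaning.
tokens = (t for doc in corpus for t in re.findall(r"[a-z']+", doc.lower()))
stopwords = {"the", "will", "and", "more", "say", "than", "them", "rather"}
frequencies = Counter(t for t in tokens if t not in stopwords)

print(frequencies.most_common(3))
# [('jobs', 4), ('policy', 2), ('new', 1)]
```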