Semiotic theory is currently the frontrunner in my list of theories to apply to my dissertation research, as it seems it can be applied directly to information design, drawing on the work of several leading researchers and writers in the field. According to Umberto Eco (1976), semiotics "is concerned with everything that can be taken as a sign. A sign is everything which can be taken as significantly substituting for something else. This something else does not necessarily have to exist or to actually be somewhere at the moment in which a sign stands in for it." Because information graphics rely on icons and illustrations to stand in for their subject matter, the theory can be applied to them directly.
Eco is one of three key figures in the development of semiotics, alongside Ferdinand de Saussure and Charles Sanders Peirce. Each developed a slightly different theory, but all three are important to consider here as I am just starting out in researching semiotic theory.
"As the study of signs systems, the basic aim of semiotic theory is to understand the structure of sign systems in relation to the way they convey meaning. Semiotics takes the view that signs can be organised in various media, to form texts that convey some kind of meaning. For example, Saussure posited that words, in order to convey meaning, consisted on two distinct parts. Firstly, the 'signified', that is the part of the word that pertains to its meaning and secondly, the 'signifier', which is the part of the words that is representative of that meaning (Saussure, 1996. p67)
Saussure considered the "signifier" to be the trigger for the mental concept (the "signified") that the sign represents, the two being linked through "a set of experiences, impressions or feelings related to an object or situation". For example, the letters D, O and G form the word "DOG", creating in our minds the picture of the animal of the same name. O'Neill (2008) writes: "Together, the signifier and the signified combine to become a sign. That is, a sign, according to Saussure, is what is experienced when someone comes into contact with a set of stimuli that can be equated to a mental concept…the signifier is the physical phenomena part of the sign and the signified is the meaning represented by that physical phenomena." (O'Neill, 2008, p.67)
O'Neill later points out that Saussure's theory as outlined above is too simple to account for the complexities of interactive media, but notes that "the concepts of syntagms and paradigms are very useful in describing interactive structures such as interfaces." (O'Neill, 2008, p.80)
Similarly, Charles Sanders Peirce "considered a sign to be made up of the representamen, the form which the sign takes, the interpretant, the sense made of the sign, and the object, to which the sign refers. The interaction between the representamen, the interpretant and the object is referred to as semiosis." (O'Neill, 2008) (Semiosis is defined by that dictionary we know as Google as "the process of signification in language or signs", or simply as "sign process" by Wikipedia.) What separates Peirce's approach to semiotics from Saussure's is its grounding in logic rather than linguistics.
The concept of phenomenology ties in with semiotics and semiosis, and was again pioneered by Peirce. He defines it as the branch of science that "ascertains and studies the kind of elements universally present in the phenomenon, meaning by the phenomenon whatever is present at any time to the mind in any way." (EP 2:259) There are three stages in phenomenology: firstness, the state of a thing being just as it is; secondness, the state of one thing being experienced in relation to, or reaction against, another; and thirdness, which O'Neill (2008) describes as "full blown semiosis" because it is "the experience of representational objects standing in for experiences of real objects". Sharp writes that "…thirdness is the domain of signification. The process of something standing in for another thing is managed by an interpretive mental process, including recall and recognition of those objects, and the meaning associated with them." (O'Neill, 2008, p.67) Thirdness could be translated to information design and navigation in ways such as an icon representing the subject matter of the page it links to, colour connotations, and so on.
This brings us to the topic of icons, indices and symbols, which are also part of Peirce's theory of semiotics. The first, icons, are described by O'Neill (2008) as follows: "Essentially icons have features or qualities that resemble those of the objects they represent…all pictures, paintings and photographs are iconic because they attempt to faithfully represent a recognisable image of their subject." (p.70)
Indices do not literally represent an idea but rather 'indicate' it. O'Neill (2008) writes: "There is a direct link between the object and the sign…There is a clear connection between the signifier and the signified, the form and the content." (p.70) Symbols, or symbolic signs, are "signs that refer to their objects by virtue of a law or socially derived rules that cause the symbol to be interpreted as referring to that object". (O'Neill, 2008, p.70)
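To make these three categories concrete for my own purposes, here is a minimal sketch in TypeScript (the type names and the weather-graphic examples are my own hypothetical illustrations, not O'Neill's or Peirce's) of how signs in an information graphic might be classified as icon, index or symbol:

```typescript
// Peirce's three sign types, as summarised by O'Neill (2008).
type SignMode = "icon" | "index" | "symbol";

interface Sign {
  mode: SignMode;
  form: string;      // the representamen: what the reader actually sees
  object: string;    // what the sign refers to
  rationale: string; // why this form is taken to stand in for this object
}

// Hypothetical signs from an information graphic about rainfall.
const signs: Sign[] = [
  {
    mode: "icon",
    form: "a simplified drawing of a rain cloud",
    object: "rainfall",
    rationale: "resembles the thing it represents",
  },
  {
    mode: "index",
    form: "puddles and wet pavement in an illustration",
    object: "recent rainfall",
    rationale: "is directly connected to the thing it points to",
  },
  {
    mode: "symbol",
    form: "the printed word 'rain'",
    object: "rainfall",
    rationale: "is understood only through socially agreed convention",
  },
];

for (const s of signs) {
  console.log(`${s.mode}: ${s.form} stands for ${s.object} because it ${s.rationale}.`);
}
```

The point of the sketch is only that the same object (rainfall) can be signified in three different modes; which mode a designer chooses changes what the reader already has to know in order to decode it.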
Umberto Eco's A Theory of Semiotics (1976) "proposes a theory of semiotics in terms of the use of signs as acts of coding and decoding messages with reference to sets of culturally defined conventions, or codes." (p.71) Eco's theory centres on socio-cultural aspects more so than Saussure's or Peirce's. His work is "based on the notion that for a sign to be understood the reader has to be 'in possession' of the correct code in order to interpret it" (O'Neill, 2008, p.71). An example the author uses is the word "blue" and the connotations one might attach to it: "…the word blue might be encountered in relation to 'sky', 'grass' and 'feeling'. Each different word alters the meaning of the blue, offering different denotations and connotations." (O'Neill, 2008, p.72) This is echoed by Kress and van Leeuwen (1996), who write that "These social codes... have to be able to manifest the particular social relationship between the producer, the receiver and the object represented" (p.73).
Syntagms are "combinations of signs that are put together in an organised way to form a meaningful whole." Sentences are considered syntagms because they are "ordered combinations of signs written one after the other to produce a meaningful statement." Paintings, sculptures and even pieces of architecture can also be said to be syntagms, since "they exist as combinations of different shapes, forms and colours that are organised in different physical positions to produce some form of meaningful or aesthetic whole."
Paradigms, in semiotic theory, are groups of "signifiers or signifieds" that are linked to each other under a common, overarching category. O'Neill (2008) links Saussure's description of paradigms to the processes in interactive media: "…interactive media interfaces are full of paradigmatic structures that are often articulated into syntagms through user interaction. Buttons and hyperlinks often have different states that change as the mouse is rolled over them or when they are clicked. Often this signifies, to the user, some form of functionality or different meaning to its original state." (p.78) Similarly, Louis Hjelmslev's theory of semiotics includes theory on how "signs provide layers of meaning for users as they interact." This can be translated to interactive media because, as O'Neill (2008) writes, "Most significations in screen-based media tend to be designed to denote something, e.g. a button to be pressed or a graphic to denote a file type." (p.78) Paradigms, then, are particularly applicable to my own project, since an app interface is built out of exactly these kinds of interactive sign structures.
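As a rough sketch of what "paradigmatic structures articulated into syntagms through user interaction" might look like in practice, the snippet below (my own hypothetical example in TypeScript, not code from O'Neill) treats the possible states of a button as a paradigm, and the ordered log of a user's interactions as the syntagm those choices produce:

```typescript
// The paradigm: the set of alternative states a button can signify.
type ButtonState = "default" | "hover" | "pressed" | "disabled";

// Each interaction selects one member of the paradigm for one on-screen element...
interface Interaction {
  target: string;     // e.g. "save-button"
  state: ButtonState; // the state the element moves into
  timestamp: number;
}

// ...and the ordered log of interactions is the syntagm: a readable,
// meaningful sequence ("the user found Save, hovered, then pressed it").
const syntagm: Interaction[] = [];

function interact(target: string, state: ButtonState): void {
  syntagm.push({ target, state, timestamp: Date.now() });
}

// A hypothetical session with an app's save button.
interact("save-button", "hover");    // rollover signifies "this is pressable"
interact("save-button", "pressed");  // click signifies the action being taken
interact("save-button", "disabled"); // disabled state signifies "already saved"

console.log(syntagm.map(i => `${i.target}:${i.state}`).join(" -> "));
// save-button:hover -> save-button:pressed -> save-button:disabled
```

Reading the logged sequence back is itself an act of interpretation: each state is selected from the same paradigm, but the meaning comes from the order in which the states occur.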
Interestingly, Peirce's theory of iconicity contradicts the view above, suggesting that "the notion of symbolic signs" is more important to interactive media. For example, icons of windows on a screen "are not iconic representations of actual windows but symbolic representations of the window concept." (O'Neill, 2008, p.79) This follows Peirce's perspective on semiotics, which is grounded in phenomenology and signs "as they are experienced", rather than Saussure's focus on how signs function. O'Neill points out that "The concepts of firstness, secondness and thirdness could prove to be crucial to a semiotics of interactive media that might manifest in many different ways from screen-based signification…". Importantly, he notes that thirdness is not the only basis for interaction, if we look at it from the perspective of Peirce's research.
O'Neill also addresses the other elements of graphic design that come into play in interactive design, and their importance in anchoring the information: "The static elements of interactive media, its layout, forms, font colours and graphics, play a huge part in establishing a frame of reference from which to engage with its interactive elements. Without the static elements to guide us on screen, we would be lost in a maelstrom of interactive and dynamic elements." (p.85)
There are many ways in which the elements of semiotic theory described above can be used as a critical framework for my own research. The relevance of the theory to my project can be summed up as follows: "Essentially, then, screen based interactive media are extremely semiotic in character." (O'Neill, 2008, p.87)
References
Eco, Umberto. 1976. A Theory of Semiotics. Bloomington: Indiana University Press.
O'Neill, Shaleph. 2008. The Semiotics of Embodied Interaction. London: Springer-Verlag.
Sharp, J. 2011. Semiotics as a Theoretical Foundation for Information Design. 2nd ed. Bloomington: Indiana University Press.