Levels of Processing Theory

And Other Related Approaches To

Understanding Memory


Where Atkinson and Shiffrin Left Off:

Three distinct memory buffers with control processes moving information from buffer to buffer
Each memory buffer has unique characteristics that specify it as a distinct system
Codes/Representations
Capacity
Rate of forgetting
Notice that what's kind of cool about this is that each buffer has multiple characteristics that distinguish it from other buffers.
The Argument Against Multi-store Models

Problem: The multi-store account depends on each buffer having several distinguishing characteristics. If that weren't true, it would be harder to justify theorizing that there are separate buffers. After all, if the only thing that distinguished LTM from STM were the rate of forgetting, you could just as easily say there is only one buffer and items simply differ in how quickly they are forgotten.
But when these issues were examined more closely, it turned out that LTM sometimes preserves acoustic and other low-level information (priming effects), and that short-term memory is sensitive both to the meaning of information and to visual information (the visuospatial sketchpad).
As for capacity, the claim that STM holds 7 +/- 2 items depends on assuming that we're talking about chunks of information. But it isn't all that easy to specify what a "chunk" is. The reasoning becomes circular: if someone remembers 20 items on a short-term memory task, you can just say they must have chunked the items into groups of three.
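As a rough illustration of that arithmetic (a hypothetical sketch, not data from any study), regrouping a 20-item list into chunks of three brings the count right back inside the 7 +/- 2 span, which is exactly why the chunking explanation is so hard to falsify:

```python
# Hypothetical illustration of the chunking arithmetic (not from any study):
# 20 individual items exceed the 7 +/- 2 limit, but regrouped into chunks
# of three the same list "fits" again -- so chunking can always be invoked
# after the fact to explain good performance.

items = [f"item{i}" for i in range(1, 21)]          # 20 single items

def chunk(seq, size):
    """Group a sequence into consecutive chunks of the given size."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

chunks = chunk(items, 3)
print(len(items))    # 20 -> well above the 7 +/- 2 span
print(len(chunks))   # 7  -> conveniently back inside the span
```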
Levels of Processing As An Alternative To The Multi-Store Models

The Theory:

When information is processed it can be processed at varying levels of depth. Deep processing refers to fully analyzing information in terms of its meaning and importance. Shallow processing refers to processing information only in terms of its surface structure (sound, letters, etc.).
Depth of processing varies along a continuum. Processing isn't deep or shallow in absolute terms; rather, processing is deep or shallow relative to other processing. Still, you can think of varying levels of processing as falling into rough categories:
Graphemic: The letters that make up the word
Orthographic: The shape of the printed word
Phonemic: The sound of the spoken word
Semantic: The meaning of the word
Deep processing leads to better memory than does shallow processing. Thus, the more you analyze items in terms of their meaning, the better you should be able to remember them.
How Could This Theory Be Used To Explain Previous Findings?

Previous researchers have found that short-term memory tends to use an acoustic code and long-term memory tends to use a semantic code.
You could explain this with Levels of Processing and eliminate the need for different stores altogether. Acoustic information is forgotten quickly (so it seems to be in short-term memory) because it is shallowly processed. Semantic information is forgotten slowly (so it seems to be in long-term memory) because it is deeply processed.
Experimental Evidence For The Levels of Processing Approach

Most of the research investigating this theory has used a technique called incidental learning. In incidental learning tasks you are presented with items without being told that you will be tested on them later, so the memory test that comes later is a surprise. This contrasts with intentional learning, where you know ahead of time that your memory for the presented material will be tested.
Why use incidental learning? Because if you told people their memories were going to be tested later on, it's likely they would try to deeply process the materials so that they would do well on the test. You would then lose the ability to compare deep processing with shallow processing; everybody would be deeply processing everything.
In addition to using incidental learning, research on levels of processing also makes use of orienting tasks. An orienting task is a task that causes people to engage in a certain level of processing. (Does this word contain the letter E? Does this word rhyme with "nurse"? Is this something people keep money in?)
Example: Craik & Tulving (1975)
--Subjects were presented with a group of words in an auditory presentation

--They had a list of questions in front of them

--As each word was read, they answered the corresponding question

--Some questions emphasized the meaning of the word. Thus the orienting task for these words was a semantic orienting task. (SPEECH: Is it a form of communication?)

--Some questions emphasized the sound of the word. Thus the orienting task for these words was a phonemic orienting task (SPEECH: Does it rhyme with "each"?)

--Subjects were able to recognize 59% of the words that had been phonemically processed and 81% of the words that had been semantically processed.

This finding is really easy to replicate and tons of people since then have shown that semantic processing improves memory relative to other more "shallow" forms of processing.
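A minimal sketch of how trial-level data from this kind of design might be organized and scored. Everything in the sketch (the words, the hit/miss values) is invented for illustration; only the structure of the design follows the description above:

```python
# Hypothetical sketch of scoring an incidental-learning experiment with
# orienting tasks. In a real design the orienting level (phonemic vs.
# semantic) would be assigned randomly to each study word; "recognized"
# records performance on the later surprise recognition test.

from collections import defaultdict

trials = [
    {"word": "SPEECH", "level": "semantic", "recognized": True},
    {"word": "TABLE",  "level": "semantic", "recognized": True},
    {"word": "NURSE",  "level": "phonemic", "recognized": False},
    {"word": "TRAIN",  "level": "phonemic", "recognized": True},
]

hits = defaultdict(int)     # recognized words per orienting level
totals = defaultdict(int)   # studied words per orienting level
for trial in trials:
    totals[trial["level"]] += 1
    hits[trial["level"]] += trial["recognized"]

for level in totals:
    print(level, f"{hits[level] / totals[level]:.0%} recognized")
```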
Trouble in Paradise: Complaints Against Levels of Processing

Perhaps the biggest complaint was that there didn't seem to be any clear way of defining what one means by DEEP versus SHALLOW processing. It's a lot like the argument we talked about before with regard to chunking: without an independent measure of depth, the definition risks circularity, because "deep" processing ends up being whatever processing turns out to produce good memory.
Another main argument against Levels of Processing Theory was that it didn't explain why deeper processing should lead to better memory.
A third complaint against Levels of Processing Theory was that it didn't do a good job explaining why different tasks that are presumably at the same level should produce different levels of performance.
Getting DEEPER into Depth of Processing

Remember that it's important to think about memory as a case where people are presented with certain cues (reminders) that help recover the memory.

One way of understanding why deep processing should lead to better memory is that deep processing may lead to a more elaborate mental representation. By elaborate we simply mean that the representation of the word becomes associated with a greater number of other things, and with a more coherent set of other things. Thus there should be more potential retrieval pathways (a lot of different cues should all work).
A second way of understanding why deep processing should lead to better memory is that deeply processed items become more distinctively represented. Distinctiveness should help memory by decreasing interference. Remember that the concept of release from proactive interference says you can decrease interference from other items by making the new items distinctive from the old items. Making items distinctive means that the cues will match one and only one item.


Other Related Ideas Known to Influence Memory

Encoding Specificity

Encoding is the stage of a memory experiment in which things are being learned. Retrieval is the stage of a memory experiment in which things are being tested.
The encoding specificity principle (which we've touched on briefly before) states that memory will be best when the context during encoding matches the context at retrieval.
Evidence in favor of encoding specificity:
Godden & Baddeley (1975) had people (SCUBA divers) learn lists either on land or under water. On later recall tests, lists were recalled better on land if learned on land and under water if learned under water.
Eich et al. (1975): people learned lists and later recalled them either under the influence of marijuana or not. Performance was worst when items were studied in the placebo condition but tested in the marijuana condition.
Encoding Variability

The encoding variability principle is the claim that memory will be best when items are studied in a variety of different contexts. The general idea is that by studying material in a variety of different contexts you maximize the number of possible effective retrieval cues. If you study in one context only, you'll remember things in that context but not in others.
Distribution of Practice

Massed Practice: Same item presented repeatedly n number of times, followed by next item, etc. (e.g. run, run, run, chair, chair, chair, rock, rock, rock...)
Distributed Practice: Items presented n number of times, but with other intervening items in between (e.g. run, chair, rock, run, chair, rock, run, chair, rock...)
Generally people have found that distributed practice leads to better memory.
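To make the two schedules concrete, here is a small hypothetical sketch that builds a massed and a distributed schedule from the same items and the same number of repetitions:

```python
# Hypothetical sketch contrasting massed and distributed practice schedules
# built from the same items and the same total number of presentations.

items = ["run", "chair", "rock"]
reps = 3

# Massed: every repetition of an item occurs before moving on to the next item.
massed = [word for word in items for _ in range(reps)]

# Distributed: repetitions of an item are separated by the other items.
distributed = [word for _ in range(reps) for word in items]

print(massed)       # ['run', 'run', 'run', 'chair', 'chair', 'chair', 'rock', 'rock', 'rock']
print(distributed)  # ['run', 'chair', 'rock', 'run', 'chair', 'rock', 'run', 'chair', 'rock']
```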