One of the changes that shipped with Lion was something known as “natural” scrolling. As part of the iOS-ification of OS X, Apple completely reversed the direction in which a person would normally scroll.
For years, scrolling behavior has been thus:
When a user scrolls down, be it via scroll wheel or two fingers on a trackpad, content hidden below the viewport is brought into view. To put it another way, by scrolling down, we are navigating the viewport down.
This is what I am calling interfaced content manipulation. There is no sense of directly touching content, because we are limited to a single axis. There is also typically a separation between the interface manipulating the content and the viewport by which the content is viewed.
A more familiar example of such an interface would be a car:
With a car, we are limited to manipulating our viewport on a single axis: the wheel. When we want the car to go to the right, we turn the wheel to the right. The effect is that content in our viewport actually moves to the left, but we have steered our viewport to the right.
The key thing to consider when we are talking about interfaced content manipulation is user intent. When the user intends to go down, they should scroll down. If the user intends to steer left, they should turn left.
In Lion, a user scrolls in the direction the content is moving. This is exactly the behavior one experiences when interacting with a touch-based device: when I move my finger up, the content slides along with it. This is what I call direct content manipulation.
With direct content manipulation, there is no disconnect between the content and the interface. Content moves exactly in the direction that the user moves it, and because the impression on the user is one of direct control, intent is no longer a factor.
The problem comes when we attempt to move to direct content manipulation, but we are still using an interface to control our viewport.
In this state, scrolling down means that the content moves down, but visually we are exposing content that is above the current viewport. The effect is one of steering the viewport up when you are scrolling down.
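The difference between the two models comes down to the sign of a single delta. A minimal sketch of the idea (all names and values here are illustrative, not any real scrolling API):

```python
CONTENT_HEIGHT = 1000   # total scrollable content, in pixels
VIEWPORT_HEIGHT = 400   # visible window, in pixels
MAX_OFFSET = CONTENT_HEIGHT - VIEWPORT_HEIGHT


def clamp(value, low, high):
    return max(low, min(high, value))


def traditional_scroll(viewport_y, wheel_delta):
    """Interfaced manipulation: scrolling down steers the viewport down,
    revealing content below it."""
    return clamp(viewport_y + wheel_delta, 0, MAX_OFFSET)


def natural_scroll(viewport_y, wheel_delta):
    """Direct manipulation: the gesture drags the content itself, so the
    viewport moves opposite to the gesture's direction."""
    return clamp(viewport_y - wheel_delta, 0, MAX_OFFSET)


# Starting mid-document, the same downward gesture diverges:
print(traditional_scroll(300, 50))  # 350: viewport moves down the document
print(natural_scroll(300, 50))      # 250: content dragged down, viewport moves up
```

Same gesture, opposite result: the only change is whether the delta is applied to the viewport or to the content.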
This may seem like a trivial distinction, and indeed the brain can be trained to get used to just about anything, but consider applying this logic to a car interface:
In this case, if I were to turn left, the car would actually go to the right! By connecting the wheel to the direction of the content, in this case what you see through your windshield, moving the content to the left would have the effect of moving the car to the right.
The reason Apple has chosen this method of interaction is that they are moving towards more direct methods of input on their traditional PCs. From larger trackpads on laptops to the Magic Trackpad, there is a case to be made for trying to bridge the gap between what is seen and what is done. Mentally, a user has to “forget” that they are using something detached from their viewport and attempt to interact with content in a natural, direct way.
Those of us who use pen-based input have also experienced this when panning over a document. Even though we are working through an interface, we still have the sense of direct control because our movements are not limited to any one axis.
The takeaway here is that methods of interaction should default to those that require the least amount of mental processing. When our brains sense a limit to our freedom of movement, intent must be considered, but when freedom is granted, so is control.