March 30, 2013

Short shorts, wholesome vs. awkward periods

As part of its early '90s revival, Urban Outfitters is trying to push high-waisted shorts for girls that end just under the butt, AKA daisy dukes. It seemed like it was mostly the chicks working there who were wearing them, whether because they're required to promote the product or because they're more adventurous than the average girl today about showing a little leg.

Hot pants, dolphin shorts, daisy dukes -- whatever they used to be called, they gave off a fun-loving yet wholesome charm that you don't feel anymore when girls wear shorts. Let's see how they pulled it off by checking out the original Daisy Duke herself:



(Back when girls wore slender heels instead of blocky boots...)

The shorts are cut very high up on the leg, ending under the crotch and slanting upward across the pelvis to reveal the hip bones and some of the underside of the butt as well. Doesn't get any more carefree than that. The trick to them not looking slutty is that the waistline reaches up to or over the belly button. The greater surface area of fabric gives the impression that they're covering up more than they actually are.

High-waisted shorts and pants do create a "long butt" effect from behind, their only real downside. I guess in the good old days, guys were more interested in looking at long legs than a plump rump. Or perhaps they figured they'd get to see it all in good time, and for now it was better to get more of a hint or tease from scoping out her legs. Build some anticipation.

Whatever their reasons, shifting focus from T&A to the legs does serve to somewhat de-sexualize male-female relations. The girl doesn't feel as on-display if she's only showcasing her legs, and the guy doesn't feel as self-conscious of his own dirty mind when he's only checking out her legs.

A focus on T&A seems to go more with society-wide anxiety / neurosis, as we last saw during the mid-century. High levels of self-awareness during the Age of Anxiety can be seen in the "sweater girl" wearing a bullet bra, and the pin-up girl sticking her curvy butt out while staring knowingly at the viewer. It's a bit too vampy, pretentious, and obvious.

Over the past 20 years, the focus has returned to the mid-century pattern, a topic I looked at earlier here. Now it's underwire / padded bras and J-Lo / Kim Kardashian booties in yoga pants. And as part of the general change toward a more-covered-up look, shorts end further down the thigh than they used to:


Compared to daisy dukes, these new shorts make the girls look a little less comfortable with themselves, which you can confirm by observing their facial expressions. To emphasize the butt, waistlines have come way down as well (though they've recovered from the Whale Tail days of the early-to-mid-2000s). Low waistlines don't mean they're actually baring their waist, of course, just trying to draw more conscious attention to their ass. As you can see in a few of the pictures, trying to have it both ways -- low waist and high-cut legs -- makes the shorts look like a thin, skimpy stretch of fabric.

Whether they look more prudish or more slutty, today's shorts lack that carefree wholesomeness of the '70s and '80s, and the broader shift in focus away from the legs and toward T&A has created a heightened self-awareness that's only made guys and girls more awkward around each other.

March 28, 2013

Street Fighter vs. Mortal Kombat in a broader perspective

A reversal of direction in the zeitgeist shows up in so many areas of the culture. That's why it makes sense to talk about a zeitgeist in the first place. To show just how broadly a change in the social-cultural atmosphere can reach, let's take a look at a mass phenomenon in youth culture of the early-to-mid-1990s -- the explosion of video games where two players fight against each other in a best 2 of 3 format. It was comparable in intensity to the Davy Crockett craze of the '50s.

First, the player vs. player genre as a whole was a qualitative change from the '80s, when video games that involved beating people up had both players teaming up to beat up hordes of enemies controlled by the computer. These included Double Dragon, Final Fight, Golden Axe, and many other quarter-eaters. With greater social isolation, kids weren't as interested in team play, and the player vs. player games sprang up to meet the new demand for anti-social ways of playing video games.

By far the two most popular fighting games were Street Fighter (II) and Mortal Kombat. Street Fighter (1991) had one foot still in the '80s, while Mortal Kombat (1992) was unmistakably '90s and quickly displaced Street Fighter. Contrasting their main features will therefore show how the zeitgeist began to shift. You may recognize some of these changes from other domains, several of which I've covered here as well. It does seem a little frivolous to look for these changes in video games, but in the archaeology of popular culture, I say leave no stone unturned.

Visually, Street Fighter has a more stylized look, and Mortal Kombat a more gritty representational look. Take a look through a whole gallery of screenshots here and here. Notice several things in the comparison below:


Street Fighter looks like the work of an illustrator. The guy getting electrocuted is shown in X-ray view, right out of a kid's cartoon. Mortal Kombat shows digitized "animation" of footage taken of live actors performing their moves. The backgrounds look like digitized photos too. The technology for digitized characters in fighting video games existed earlier -- Pit-Fighter used it in 1990 -- but it wasn't very popular. Not until Mortal Kombat.

In Street Fighter, the only effluvium you see is an occasional burst of cartoony vomit when a guy gets struck really hard. In Mortal Kombat, they try to make the blood look as real as possible, and you see it more often.

This shift from stylized to photorealistic shows up everywhere else in the visual culture. Remember when movie posters and album covers featured illustration rather than photography?

Throughout the game, Street Fighter also shows a broader spectrum of colors, greater use of contrasting colors, and higher saturation levels than the more monochromatic and washed-out Mortal Kombat, which looks like a prelude to The Matrix.

The background environments in Street Fighter are more distinctive: you know you're in a Brazilian rainforest, a Spanish flamenco bar, and so on. In Mortal Kombat, it feels like it's all taking place in a void with a few props thrown in, again like The Matrix. Contrast that with Videodrome, where you get a strong flavor of the city, generally not very palatable. The pure fantasy movies on the rise also feel like they're taking place in the middle of nowhere, totally generic, not some distinctive real place that we just haven't been to before.

Mortal Kombat's explosion of gore also puts it squarely in the more unwholesome period of the past 20 years. In the image above, you can see the guy on the left hurling a spear into the other guy's chest, and it's attached to a rope that he's going to use to drag him over in a daze, setting him up for a free cheapshot.

But Mortal Kombat went even further -- everything had to be EXTREME in the '90s -- by adding an element of gameplay that Street Fighter lacked. When you won the second match, the fight didn't just end there. You were given the chance to perform a special move, called a "fatality," on your helpless opponent. These were so over-the-top, like ripping the guy's head off with the spinal column still attached, blood dripping down, while the headless body slumps to the ground.

This level of goriness heralded the rebirth of mid-century unwholesomeness, which back in those days showed up in comic books. That caused a panic over horror/crime comics, led to Congressional hearings, and ended up with self-censorship (the Comics Code Authority). The exact same course played out again in the '90s, with the panic over violent video games, Congressional hearings, and self-censorship (the Entertainment Software Rating Board).

Come to think of it, we looked forward to pulling off one of these ultra-gory, humiliating fatality moves more than actually winning the best 2 of 3. It was part of the trend away from good sportsmanship and toward that whole "In your FACE, bitch!" and "Suck it!" kind of attitude.

Mortal Kombat II from 1993 slathered thick layers of '90s meta-aware ironic dorkiness on top of the finishing moves. It retained the EXTREME fatalities, and the later sequels even added animalities, where you turned into a fierce animal before ripping your opponent in half or whatever. But II also let you perform harmless finishing moves, like turning the loser into a crying baby (a "babality"), or doing overly cutesy friendship things for them, like cutting out a set of paper dolls to offer your defeated opponent. Huh-huh, I get it.

And then there was that fourth-wall-breaking moment when a digitized photo of one of the game designers, or whoever he was, popped up in the corner of the screen to yell "Toasty!" every once in a while. If you pressed the right buttons then, a special level would open up. The main thing you took away from it was, "Huh-huh, this game is so wacky and zany and quirky!" And lame.

To play well at Street Fighter, you only needed to memorize a handful of button combinations to execute certain moves. But you didn't really need these special moves much anyway. The characters were differentiated and specialized enough in their skills that any one you picked had a natural advantage over at least some of the other characters (like fast vs. slow). Not much memorization or repetition required. With Mortal Kombat, the characters are just about all the same in their speed, jumping, and other basic skills. That required you to memorize all of their special moves to gain the upper hand in what would otherwise be a stalemate between clones.

This shift toward memorization, mindless repetition, and checking off all the boxes on a list (of special moves to master) is part of the broader trend toward OCD behavior over the past 20 years. It got even worse with the sequels to Mortal Kombat -- the only person who could master so many moves in Mortal Kombat II was some geek who spent all his free time alone in the movie theater lobby hunched over the arcade cabinet.

That was compounded by the Pokemon-like proliferation of characters to choose from in the sequels. Gotta master 'em all! The first Mortal Kombat had 7 characters, the sequel had 12, the next had 15, and so on.

Because of its more rule-structured, OCD type of gameplay, kids didn't crowd around Mortal Kombat and get as excited as they did around Street Fighter, whose gameplay was looser. The group of dudes hanging around Street Fighter were always more in worry-free, hanging-out mode; around Mortal Kombat, they were more in high-pressure, test-taking mode. It's like a bunch of friends having a couple beers while shooting the bull on the front lawn, as opposed to following the rules of beer pong or flip cup, with no interaction. Mortal Kombat is more choreographed, not spontaneous, kind of like the fake-looking fight scenes in the new Star Wars trilogy compared to the original ones.

Street Fighter thus also allowed younger kids to play alongside the older ones. When it blew up, I was just 10, but the teenage kids didn't mind me hanging around the arcade cabinet with them. At the mall where I played it the most, there was one guy who could kick just about anybody's ass. Usually I didn't even bother putting my quarter next in line on the monitor when he was there. But a few times I did, and one of those times I was this close to beating him.

But with Mortal Kombat and its sequels, fucking forget about it. If there was some nerd who'd memorized and practiced the list of moves and knew which character to play against which, there'd be no way a 10-year-old non-fanatic player would be able to hold his own.

Those are some of the major differences I remember, and that echo so much else in the broader changes underway during the '90s. The Street Fighter craze died off pretty quickly after Mortal Kombat came out. My friends and I still played Street Fighter a lot at home -- that was an easy way out of a slump in the course of a sleepover. But I don't remember any of the sequels coming out in arcades at all; they must have been very limited. I remember some of the variations on Street Fighter II coming out for home consoles, not with much enthusiasm from us kids.

Mortal Kombat kept going and going, though. I clearly remember the popularity of II in the arcades, and the snack shop near my freshman dorm had a cabinet for Mortal Kombat 4. Not to mention the home versions. The Genesis version of the original was particularly popular because they kept the blood in it (unlockable with a cheat code), and you didn't see that too much in home games at the time.

Both series got feature-length movies, and Mortal Kombat earned $122 million, while Street Fighter took in $99 million. Mortal Kombat also produced a fairly popular, ear-grating techno song, while Street Fighter didn't. Street Fighter belonged more to the very end of the '80s phenomenon of the action movie with an engaging, motivational soundtrack (Rocky III and IV, The Karate Kid, Top Gun, and so on).

Guess my boredom with Mortal Kombat was yet another case of not wanting the '80s to devolve into the '90s. Alternative music, saggy jeans, the Jerry Springer Show... and Mortal Kombat. Some of us only flirted with those things and quickly looked to anything cool from earlier times -- punk music, thrift store clothes, the archive at the local video rental store. The birth of vintage mania, as what was new became so boring, embarrassing, and degrading.

March 27, 2013

Light-skinned blacks smarter than dark-skinned blacks

The General Social Survey data from 2012 are up online. A new question has the interviewer rate the respondent's facial skin color from lightest to darkest on a 10-point scale. To get decent sample sizes, I lumped the lighter ratings together (1-5) and the darker ones together (6-10). The GSS also has a 10-word vocabulary test (wordsum), which serves as a rough measure of IQ.

The blacks who have lighter skin tones average 5.7 words correct, while the darker ones average 4.6 words. This difference works out to 0.54 standard deviations, or about 8 points of IQ. Those who scored 8 words or more out of 10 made up 15% of light-skinned but only 6% of dark-skinned blacks.
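
If you want to check the conversion, here's a quick sketch of the arithmetic. The wordsum standard deviation of about 2 is back-derived from the 0.54 figure above, not computed from the GSS microdata, so treat it as an assumption.

# Quick check of the wordsum-gap-to-IQ conversion.
# The wordsum standard deviation is an assumption implied by the figures above.
light_mean = 5.7    # mean words correct, lighter tones (ratetone 1-5)
dark_mean = 4.6     # mean words correct, darker tones (ratetone 6-10)
wordsum_sd = 2.04   # assumed standard deviation of the 10-word vocab test
iq_sd = 15          # conventional IQ standard deviation

gap_in_sd = (light_mean - dark_mean) / wordsum_sd   # about 0.54 standard deviations
gap_in_iq = gap_in_sd * iq_sd                       # about 8 IQ points
print(round(gap_in_sd, 2), round(gap_in_iq, 1))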

Whites average 6.3 words, so light-skinned blacks are closer to whites than they are to dark-skinned blacks in intelligence. Hence the constant teasing, hounding, and ostracizing that those with light skin suffer from the dark majority of blacks.

We can extrapolate these results to see a future where light-skinned blacks form their own ethnic group and culture, although that may be centuries away. Yet the fundamental split is already visible, as it were.

GSS variables used: wordsum, ratetone, race

March 26, 2013

Public transportation as a tool for social segregation

Contemporary hypocrisy prevents members of the elite from building a prison-like dome over The Kind-of Sketchy Parts of Town, because that would have a disparate impact on groups whose shortcomings pile up collectively like a garbage heap in those parts of town -- blacks, Mexicans, drug addicts, crazies, whoever.

In the good old days, it was OK to talk about hoods, bums, "the ghetto," and so on. Shoot, if black hoods tried to mug you on the subway, you could just pull out your gun and blow them away, no questions asked. Disparate impact was not a major theme of daily public life, so you could isolate the shithole parts of town and not have to worry about race hustlers and civil rights lawyers teaming up to sue your ass.

Now, in order to build a wall to keep out group A, you have to wall off groups B through Z as well. Those kept-out groups are so all-encompassing in terms of race, sex, income, education, politics, and so on. Really anyone outside your own little neck of the woods. Hence you can't be the target of disparate impact lawsuits, reputation-poisoning charges of racism, etc. You just, uh, prefer a little more local flavor in your lifestyle.

I've only gradually come to understand this by getting around without a car for nearly two years now, which ain't so bad where I live. However, I have noticed that it's practically impossible to reach a range of places I used to frequent in a somewhat well-to-do area. It's not even Georgetown, where very rich and powerful people try to keep the hordes at bay by not allowing a metro line into their area. It's a barely upper-middle-class area with the mainstays of Stuff White People Like -- Whole Foods, Barnes & Noble, large park with joggers and doggie-walkers, etc. Nor is it a racial island, since there's no real race problem in the whole metro area.

If it were rich whites trying to keep out poor blacks, I could understand. But it's more like the SWPLs trying to keep out everybody else, including the middle / upper-middle-class people in my neighborhood, who are also white, educated, law-abiding, liberal, indie-coffee-shop-supporting, outdoors-loving, etc.

On the whole, they're not so different from the SWPLs, except for one thing -- SWPLs organize. They're control freaks, whereas most people around where I live aren't so keen on micro-managing and engineering their own and other people's lives. Like I doubt the parents around here would lobby the school board to ban peanuts, or chauffeur their kid around from lacrosse practice to $100-per-hour test prep sessions.

That trait goes overlooked because it's usually correlated with other, more visible traits like race, income, education, etc. But here where the metro area is fairly homogeneous and egalitarian on those measures, this personality difference leaps out at you. All else equal, the OCD control freaks will take over a society. And the more that trait increases over time, the more we'll see not just this or that area walling itself off, but every one of them. Some weird containerization of half-human cargo.

The most important strategy seems to be getting the out-group to stop driving cars and rely on public transit instead. Remember, the SWPLs are happy to drive their SUVs and Priuses around their community, but if other people drive cars too, they could drive them right into SWPL country. Could we put up toll booths? Too much hassle for us every day. How about a wall with some kind of ID-based entry? Nah, smacks too much of gated community racism. Well then, I guess we could just cripple the out-group's control over their own automobiles.

Public transportation networks do not arise organically; they are centrally planned and hence shaped by whoever has the most influence. In a not-so-third-world-looking place, that means the control freaks. They may have a bus route or two go through their area, but it will not go very far outside, which would expose them to invaders hopping on. And it allows the true believers to ride public transit without having to sit next to the public.

Moreover, there will be few or no transfer points near the SWPL stops. It will be hypothetically possible, if you study the network map, to get from point A outside of SWPL country to point B inside. So most of those kept out may not notice so easily, and jihadist activists will have a tough time arguing that the control freaks are aiming for social segregation. The thing is, it's going to take at least an hour and a half for what should be a more direct 15-to-20-minute car ride. And you'll have to make multiple transfers, perhaps even use different modes of transportation, stand around waiting for the next vehicle to show up, exposed to the elements, winding all around the area.

In short, it's meaningless to talk about "access to public transportation." It's about the accessibility of point B, using public transportation. But by talking about access to public transportation, the SWPL control freaks get a chance to moralistically preen, so concerned for the plight of the common man, while deflecting attention away from the configuration of the public transit network that is designed to keep the common man in his place. It's win-win.

Even better, in the rare event that someone does notice how public transit has been used to achieve social segregation, it makes it look like a policy wrought by bad politicians. SWPL control freak residents thus get a huge cover-your-ass payoff out of the deal too.

The eco-friendly, carbon footprint talk is just rationalizing bullshit. It's all about locking the out-group members onto pre-determined paths that will effectively never allow them into Our Quirky Community. Letting them drive cars is just leaving way too much up to chance for the control freaks to tolerate it. The various measures to raise the price of owning and driving a car are part of the whole thing, but really you want them to just abandon the car altogether, and that means you need to give them an alternative, suitably sugar-coated.

I want to stress that this is not an understandable case of law-abiding middle-class whites wanting to keep out violent black hoods, and just having to take a more indirect route to doing so in these overly sensitive times. Even in homogeneous, egalitarian metro areas, we're going to see the SWPL control freaks colonizing the more desirable areas and walling everybody else out. It is more of a prelude to the disintegration of civil society than to a clash between opposing race-and-class factions.

March 25, 2013

The dulling down of video game graphics

To establish my Real Serious Person credentials, I usually begin a post about video games by declaring that I haven't been a part of that world for quite a while now, but that they're still worth examining when we ask what the visual culture of the 21st century looks like.

In getting a feel for the zeitgeist, it's a mistake to look at any medium isolated from all others, especially if they appeal to the same sense, like visual media. I've gone over some of the major changes in the visual culture elsewhere, and this post will not be a summary of all that. It will only establish a pattern at the low level. Sometime later I might provide links and general discussion to all of the low-level pieces that fit into the general picture.

No way I'm going to review the graphics of every video game ever made, so I'll try to stick with popular and representative games, what resonated with the general audience. As with movies, still images from one part of a game may not look like another part, and delightful visuals come out more in the playing and animation. But they do give a hint of its visual style. And I'm limited to images that I can quickly find online. So there won't be a Hall of Fame and Shame feel to the images here; just a series that will highlight some obvious differences between then and now.

I won't be including images within this post because they'd take too long to put together, and I could be accused of taking unrepresentative shots from the games. Instead I'll mostly link to a screenshot at GameFAQs, which you can open in a new tab, and, if you want, explore the rest of the shots by clicking on the light blue tab that reads "Imgs" or "Images". The first two links in each topic will be to games from the 16-bit era and earlier, the second two from the 3-D era.

Striking visuals make use of contrasts, so I'll go through some of the most common contrasts and show how they've become more muted over the past 15-20 years. "Striking" simply means attention-getting and memorable, not mind-blowing.

Bright vs. dark lighting. I don't know if early video games could independently vary both the brightness and the hue (red, blue, yellow, etc.). A simple way to achieve this is to make the background black or another "dark" hue, and the characters and terrain in the foreground white, red, orange, or other "bright" hue.

Such contrasts used to be common, though not pervasive, whereas now, when the designers can fool around with the lighting however they like, it tends to be uniform -- all bright, all dark, or all in-the-middle. In almost every screenshot or video clip I've seen of games from the PlayStation / N64 era to today, I can't tell what I'm looking at. Figure vs. ground, background vs. foreground, etc., all just blend into each other. And the uniform level of lighting is part of that. Nothing stands out from different intensities of lighting. Worse than being uniform, it's usually uniformly dark.
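
If you wanted to put a number on that, here's a rough sketch -- nothing I actually ran for this post -- of how you could measure the spread of lighting in a screenshot by converting the pixels to HSV, where brightness (the "value" channel) is a separate axis from hue. It assumes the Pillow imaging library, and "screenshot.png" is just a stand-in filename.

# Rough sketch: put a number on lighting contrast in a screenshot.
# Assumes the Pillow imaging library; "screenshot.png" is a stand-in filename.
from PIL import Image
import colorsys

img = Image.open("screenshot.png").convert("RGB")
values = sorted(colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[2]
                for r, g, b in img.getdata())

tenth = len(values) // 10
darkest = sum(values[:tenth]) / tenth      # average brightness of the dimmest 10% of pixels
brightest = sum(values[-tenth:]) / tenth   # average brightness of the brightest 10% of pixels
print("lighting spread:", round(brightest - darkest, 2))

The point is just that a nighttime street in Streets of Rage II should score far higher on that spread than a uniformly murky Modern Warfare 2 corridor.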

Image - Streets of Rage II (Sega Genesis, 1992)
Image - Super Metroid (Super Nintendo, 1994)
Image - The Legend of Zelda: Ocarina of Time (Nintendo 64, 1998)
Image - Call of Duty: Modern Warfare 2 (PlayStation 3, 2009)

Hot vs. cold hues. This contrast pits the hues closer to red against those closer to blue. Starting again with the PlayStation / N64 era, video games gradually let the full color spectrum fall into disuse, let alone juxtaposing opposite hues -- something that used to be normal. This may have something to do with the more naturalistic approach to video game graphics, but I don't think that's the main reason. Why not have the characters wear clothes with strong color contrasts? They're not only trying to look naturalistic but also contemporary -- and people dress in such drab colors today. Same with architecture, cars, devices -- no colors, let alone contrasts. So here we see video games directly reflecting the larger visual culture.

Image - Mega Man 3 (Nintendo, 1990)
Image - Sonic the Hedgehog (Sega Genesis, 1991)
Image - Resident Evil (PlayStation, 1996)
Image - Grand Theft Auto IV (PlayStation 3, 2008)

Shallow vs. deep staging. By trying to attend to too much detail across a deep range of distances, we get lost. The most striking paintings do not exploit deep 3-D perspective very much -- that diffuses our attention across too much space. Restricting the number of distances that we're supposed to take in makes it much easier to focus on the important stuff, and perhaps build up tension.

Nothing required a switch from 2-D to 3-D graphics in video games. In fact, a number of 2-D games were made into the late '90s on home consoles, and later into the 2000s on handheld systems only. (Although even there, the new Nintendo 3DS shows that soon even handhelds may show mostly or only 3-D graphics.) Rather, audience tastes changed during the mid-'90s, explaining the widespread adoption of consoles with 3-D graphics.

I found them boring to look at -- like Atari in 3-D -- and didn't bother with most games after that. Earlier games made use of only two planes -- one up close where all the action was happening, and a distant background to set the atmosphere, not always with much detail (perhaps just a monochrome light blue plane for "the sky").

Image - Zelda II: The Adventure of Link (Nintendo, 1988)
Image - Super Castlevania IV (Super Nintendo, 1991)
Image - World of Warcraft (PC/Mac, 2004)
Image - Assassin's Creed: Brotherhood (PlayStation 3, 2010)

Exotic vs. familiar imagery. All familiar looks too everyday, not particularly worth attending to, while all exotic looks so remote as to be irrelevant to here and now. Combining both gets your attention -- you weren't expecting the one world and the other world to bleed together. This tends to employ contrasts across different geographical regions, but could also be across time, or both.

In contrast to earlier games, recent ones are split into either familiar settings and imagery or fantasy settings and imagery, with little of the eclectic here-meets-there style. You see the same thing in the look of movies: either the pure remote fantasy of Lord of the Rings, Harry Potter, and Avatar, or the pure familiar realism of Saving Private Ryan, Spider-Man, and Twilight.

Image - Forgotten Worlds (Arcade, 1988)
Image - Strider (Arcade, 1989)
Image - The Sims (PC, 2000)
Image - The Elder Scrolls V: Skyrim (PC, 2011)

Large vs. small figure sizes. Most video games today don't even place human-sized characters against gigantic natural or architectural environments to create a sense of the sublime. Occasionally they do, though, since that is allowed under the naturalistic approach. Earlier graphics contrasted the size not only of the character and the environment, but of the characters themselves.

Typically that was saved for tougher enemies and for "boss fights" at the end of a level, where the thing you had to fight to clear that level would often be twice your size or larger, sometimes filling a good chunk of the screen. The main thing needed to create tension and give you a sense of achievement after defeating the boss is just making them hard to kill -- stronger attacks and a tougher defense than the enemies you're used to. There's no reason the boss has to be huge, but it amplifies the tension for the largely juvenile audience that video games are made for.

Generally you only see figure size contrasts in the pure fantasy games today, which aren't a very popular genre. In the late 1980s, even a boxing game like Mike Tyson's Punch-Out featured opponents who were more than twice your size.

Image - Altered Beast (Sega Genesis, 1989)
Image - Mega Man 2 (Nintendo, 1989)
Image - GoldenEye 007 (Nintendo 64, 1997)
Image - Left 4 Dead 2 (PC, 2009)

Saturated vs. dull colors. In this case, striking visuals come from picking one side rather than juxtaposing both in a contrast. Rubens' paintings are more striking than Rembrandt's because, even though both may exploit light-dark contrasts, the same hue will be more vivid or full in Rubens' and more muted or subdued in Rembrandt's.

Since the PlayStation / N64 era, video game graphics have taken on a totally washed-out look, which compounds the problems of having uniform lighting. It makes it even more difficult for the important things to stand out from the less important ones. Earlier games did not have a pre-school level of super-saturated colors, but were still more in that direction, giving them a more vivid look.

No additional images for this one, since you've already seen the washed-out vs. vivid look in the ones above.
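
Same caveat as the lighting sketch earlier: if you wanted a crude number for "washed-out," you could average the saturation channel instead of looking at the spread of the value channel. Again just a sketch, with the same assumed library and stand-in filename.

# Rough sketch: mean saturation as a crude vividness score (same assumptions as the lighting sketch).
from PIL import Image
import colorsys

img = Image.open("screenshot.png").convert("RGB")
pixels = list(img.getdata())
mean_sat = sum(colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[1]
               for r, g, b in pixels) / len(pixels)
print("mean saturation:", round(mean_sat, 2))   # near 1 = vivid, near 0 = washed-out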

Repeated design motifs. Also not a contrast, but another hard-to-miss difference between earlier and more recent graphics. Repeated design motifs make an image more memorable (catchy), at the expense of naturalism. Their presence gives earlier games a more primitive tribal art / crafts kind of look, more ornamental and geometric than purely representational. And given that video games can't (yet?) reach the representational level of painting or photography, they might as well play up the geometric look, like jewelry, clothing, "products," furniture, and architecture.

Image - Contra III: The Alien Wars (Super Nintendo, 1992)
Image - Sonic CD (Sega CD, 1993)
Image - Super Mario Sunshine (Nintendo GameCube, 2002)
Image - Batman: Arkham Asylum (PlayStation 3, 2009)

Video games are squarely middle-brow and for juvenile audiences, but that doesn't mean they don't count in a larger account of the visual culture. Their increasingly dull look since the mid-1990s has deprived young audiences of what could have been something exciting to look at, comparable to the concurrent decline of poster art, album cover art, and book illustrations.

And of course there's the chicken-and-egg problem that young people these days have much more boring tastes to begin with than their counterparts of 20 to 30 to 40 years ago. Sometime soon the next crop of youngsters will have somewhat cooler tastes, and it will be interesting to see what video games will look like as they shift back toward accommodating audiences who want striking visuals, but this time around with a richer set of technology at the designers' disposal. Doubt I'll be enjoying them personally, but it'll be heart-warming to see young people once again getting into something fun rather than dull.

March 24, 2013

What icons of the '80s should stay in the '80s?

The greatest decade since the '20s was not without its faults. In the interest of painting its portrait warts and all, what would you include?

The low-fat / low-cal obsession. It wasn't as bad as now, but that was when it started to go mainstream. File vegetarianism under this theme too.

Jogging craze, running shoes, "performance" shoes / clothing / accessories in general. I know jogging began a little earlier, but the whole Nike Air, biker shorts as everyday attire, etc. thing got going then.

Bodybuilding. Nowhere near today's level of bro-science, but still something newly mainstream back then.

Working out at the gym, as opposed to doing athletic activities. Again, just getting started, but still something unmistakably on-the-rise. I don't mind women going to their aerobics classes, since that was more group-interactive, and women aren't going to be playing team sports or doing much athletics anyway, so no huge loss. But guys doing all those repetitive isolated movements, mostly in isolation from other people, is a step down the ladder.

Those are just off the top of my head; I didn't plan to focus only on health / exercise. But that does seem to be one area where we can find the roots of today's weirdness in the '80s. Focusing on "health" mostly means personal rather than public health, and it invites OCD thinking and behavior -- taboo foods, and well-known cures and routines that can ward off the threat, though only with frequent repetition.

These trends therefore went against the decade's overall spirit of carefree and unpretentious living, and don't ever need to see a revival.

March 20, 2013

Ethnic consciousness began with archaic-modern human encounters

A conscious awareness of your own distinctiveness relies on a contrast with whoever it is that you're distinct from. Earlier I tossed around some of Julian Jaynes' ideas about the origins of self-awareness, but before that there was group awareness. Where did that come from? It must have required encountering an Other or out-group that was so strikingly different that it sparked an awareness of the in-group's distinctiveness. For the first time, we sought answers about what made them, them -- and us, us.

And this encounter does not create symmetric effects in each of the two strikingly different groups. Typically the groups are not equal in size and local dominance -- one is large and established, the other is small and upstart. Individuals of the established group do not need to band together to maintain their position, while those in the upstart group do. They're just eking out a living, and small groups are more likely to go extinct from random dumb luck alone.

The greater solidarity that grows within the small upstart group allows them to slowly take over the formerly large group and become the new establishment, waiting to be displaced by the next wave of upstarts. Again, this happens most intensely when there is a strong Us vs. Them contrast, not just any old groups who are small vs. large in size. They look different, talk different, act different, eat different, think different.

These are the basic ideas behind Peter Turchin's concept of the "metaethnic frontier" or faultline between two societies, which breeds greater group awareness and solidarity among the initially weaker group. That imbalance in the growth of solidarity is one of the key differences that eventually leads the expanding group to grow complacent, and the expanded-upon group to band together, push back, and perhaps take over the formerly dominant group. His lively, readable book War and Peace and War develops these ideas to account for the rise and fall of a range of societies and empires.

To keep the model from growing unwieldy, Turchin focuses primarily on societies that are beyond the hunter-gatherer way of life but still pre-industrial. This includes agrarian and pastoralist groups. Generally the story is one of an expanding large group, and a small group that bears the brunt of that expansion -- the Celts who fanned across Europe, squeezing the Romans in the Italian peninsula, or the Persians and Arabs who squeezed Byzantium from the east, and the Slavs, Avars, and Bulgars who squeezed it from the west. These pressure-cooker conditions caused the besieged groups to band together more tightly and cultivate a strong group identity, ultimately leading to Roman and Byzantine dominance over the former giants, who had begun resting on their laurels.

If it happened that way in historical times, why not in prehistorical times as well? Before agriculture, though, human hunter-gatherer groups would not have been so strikingly differentiated as they would be when the nomadic pastoralists from the steppes clashed with the sedentary agrarian civilizations of Eurasia. Today the pygmy hunter-gatherers occupy a somewhat different niche in Africa than the Bushmen hunter-gatherers, but the contrast is nowhere near as stark as Mongolian vs. Russian or Roman vs. Arabian. And with transportation limited to walking, hunter-gatherers from Africa, Australia, Asia, and Europe would never have run into one another.

But back in those days, they could do better than Mongolian vs. Russian -- they could do human vs. Neanderthal, human vs. Denisovan, and human vs. a mystery archaic species in Africa. We know that anatomically modern humans existed alongside the archaic hominins for awhile because the moderns show genetic evidence of having picked up genes from the archaics. Where environmental conditions have been favorable and where fieldwork has been extensive, archaeology also shows overlap of modern and archaic groups in Europe and western/central Asia.

And we know that the contrast would have been huge because when humans first encountered the archaics, it was in a setting that we were not best adapted to. They had gotten there first, adapted pretty well, and established themselves. We were just a ragtag bunch of newcomers shaped to fit Africa. Even in Africa, the modern-archaic encounter must have been outside our normal habitat range up to that point -- because someone else was already there.

To human beings who had just wandered into a strange new territory, the entrenched archaics must have seemed disturbing and unsettling -- not only their looks but their sounds, their social behavior, their hunting tactics, everything. Their whole being was shaped to fit a different environment than ours was. Now, if a species seems too different from us, like if humans see a pride of lions, we don't feel a keen Us vs. Them antagonism (not at a safe distance, anyway). They're obviously not us, but they're not really in the same game as us, otherwise we'd both have similar adaptations, including those evident to our senses.

But if something looks and behaves more like us, they're probably fashioned for a similar way of life as we are -- and that means they're our direct competitors for territory and resources. Of course, if they look identical to us, they're probably from our own small group, perhaps even being close kin, and we'll feel warm toward them and treat them all right.

It's where they just cross that perceptual threshold of "looking kinda like us" -- but no further -- that we would've felt the strongest sense of Us vs. Them. It may not quite have reached the atmosphere of "Holy shit dudes, I think we just wandered onto the Planet of the Apes." But they would have seemed extremely weird, twisted, and unsettling in our eyes, if not in the eyes of a Martian biologist, who would see the two groups as we would see cheetahs and leopards.

Beholding not only a single individual but an entire population of humanoid creatures, and observing that they are thriving in this land rather than drying up like freaks of nature should be -- we are forced to ask ourselves what makes them, them, and us, us. They have some weird inner essence, so we must have a normal and healthy inner essence. That binds all of Us together, against Them.

Where did our unique, healthy, strengthening essence come from? Responses could have varied, but a totem animal was probably the main answer. Whatever path they chose, though, this is how creation myths, origin narratives, and so on got their beginning. Not to mention the visual and material creations that reinforced ethnic identity and made the origin myths more palpable. (In a later post, I'll focus in more detail on the birth of prehistoric art as an outgrowth of modern-archaic encounters.)

The main difference between these prehistoric origins of ethnic consciousness and the post-agricultural pattern is that in the past 10,000 years, it was the larger dominant group who expanded at the expense of a tiny besieged group, whereas in prehistory it was tiny tribes of explorers who secured a toehold in somebody else's territory. We felt like, if we want to hold on here, when they are so numerous and well established, we'd better stick together.

Once it became clear that the moderns and archaics were pursuing fairly similar niches, we realized that this town isn't big enough for the two of us. Then our greater ethnic consciousness emboldened us to take them over, whether by displacing them from the territory and letting them starve, or killing them outright. It wasn't our greater intelligence or fancier tools that killed off the archaic humans -- it was our team-fanatical social psychology. We were in it to win it.

March 19, 2013

Great Southern Land

All this talk about primitive man -- how could I have forgotten to post this video?



Ancient hybridization in Africa between archaic and modern humans

I've been putting together a theory about the origin of prehistoric art (and in a way, of all art), but one thing kept bugging me. The theory required there to be an archaic form of the genus Homo in Africa that Homo sapiens would have run into, and possibly interbred with, sometime between 20,000 and 40,000 years ago.

I don't stay as up-to-the-minute on human genetic evolution as I used to, so I went to check, and there it was: "Genetic evidence for archaic admixture in Africa." The interbreeding probably occurred around 35,000 to 40,000 years ago, with some other group that had split off from Homo sapiens around 700,000 years ago. Well goddamn. Better to read it a year and a half late than never, I guess. Two follow-up articles, which I have not read (here and here), each by a separate team, confirm that hunter-gatherers in Africa show signs of their ancestors having interbred with some archaic form of human (i.e., not Homo sapiens) tens of thousands of years ago.

Correction: the last article linked only discusses admixture among various sapiens groups in Africa, not with archaics as well.

This is analogous to the sapiens-Neanderthal and sapiens-Denisovan interbreedings, whose signs show up in the genomes of living people today. (Denisovans were closer to Neanderthals than to us, though still distinct from each other.) Those interbreedings happened in Eurasia, while the ones reported above happened within Africa.

In a round-up of some of the main implications of the first article, Dienekes Pontikos wondered:

It is certainly counterintuitive that admixture with archaics would have happened in Africa after it happened in Eurasia. . . It is difficult to believe that Homo sapiens waited 160 [thousand years] to mix with his archaic neighbors in Africa... and yet started hooking up right away with Eurasian archaics.

My hunch is that Homo sapiens was already pretty well suited to Africa, no matter which of the many diverse environments they found themselves in. So, while this mystery archaic Homo must have been locally better adapted than sapiens in some way (which is why we picked up and kept some of its genes), it wasn't a huge difference. Not so pressing a need to get their genes.

Once sapiens left Africa, though, all bets were off -- we needed all the help we could get, and fast, by picking up whatever genes the Neanderthals had that adapted them to the strange new world that we had stumbled into. It was colder, had a different mix of infectious diseases, etc.

Hopefully I'll be able to throw together a post soon on what this all has to do with the birth of art. In the meantime, just mull over the idea that hunter-gatherers from sub-Saharan Africa aren't a 100% window into our primeval past. Some things unique to them, not shared with hunter-gatherers elsewhere in the world, could have come from the mystery archaic species in Africa. I wonder about linguistic clicks (or something related to them). Or "steatopygia," i.e. having a butt big enough that your baby can stand on it when you're upright.

March 18, 2013

Paleo footwear

At the end of the summer I decided to try out running shoes as everyday footwear for the first time in I can't remember how long -- maybe ever. Got two pairs of ASICS '81 or something, mostly because they looked nice and felt comfortable in the store. Right away I noticed how cushion-y they felt, but I thought that might be a nice change of pace since I walk so much on paved surfaces.

Something else never felt quite right about them, though it was hard to tell without switching back to uncushioned shoes. Your leg muscles don't seem to do as much flexing or whatever when you're walking around in running shoes, or other shoes with lots of padding (seemingly most shoes these days). Especially the muscles on the back of your thigh. I think they've gotten smaller over the period of wearing cushioned shoes.

Before that, my sneakers were slightly upscale Chuck Taylors, and your feet feel the ground more in those. More of a grip against the surface, and your muscles pushing you off of it. Yesterday at Urban Outfitters I saw some bright yellow canvas sneakers, and figured they were worth $18. Hardly any cushioning at all.

Man, I'd almost forgotten what it felt like for your feet to really contact the ground, the back thigh muscles flexing, and your toes gripping the surface and pushing you forward. I don't think I'll be wearing cushioned, arch-supporting shoes ever again.

Here is a great article (with references) on how modern athletic shoes cause running injuries. Injuries are in the tail of the distribution; presumably the bulk of the others suffer from milder screw-ups that don't land them in the doctor's office. Here is a report on a more recent study about running barefoot to regain a normal healthy gait or stride, not landing so squarely on the heel. And here is a report of an article about shock-absorbing shoes being bad for people with arthritic knees.

It's a familiar theme -- trying to pad the body prevents hormesis, where experiencing lots of little and not so little stressors signals our body to improve itself to meet these challenges. Take away stressors, and your body starts to atrophy. When it inevitably does encounter a stressor, it'll be too much too soon, and the system will be overwhelmed.

Imagine someone who, for years and years, never lifts or carries anything of substantial weight. Now give him a 50-pound load to move from point A to point B, hardly an impossible task -- unless he's weak from inexperience. Probably won't be able to, or he'll be straining too hard if he does.

With cushion-y shoes, the information about the impact of the ground doesn't reach our nervous system, being wiped out by the shock-absorbing materials. So we underestimate how much impact we're truly experiencing, leading us to walk or run in a more heavy-footed way. A weaker system that ultimately gets put to a greater stress means it'll get overwhelmed, whether the injury is mild or severe.

Before, I showed that a rising crime rate causes stronger bones, and a falling crime rate weaker bones, using data on the incidence of forearm fractures across four decades. Rising-crime times feature more rough-and-tumble play, especially in the outdoors (and up in trees), and that spurs development. Young people are so fragile today because they didn't get banged up at all as children, so their bodies never got the signal to build up stronger bones to compensate.

I bet we'd see something similar with foot defects like flat feet. (I searched for studies on incidence over time but couldn't find any.) South Asian children who don't wear shoes hardly develop flat feet at all, while those who wear shoes do develop them. The arches of the foot develop in response to walking and running barefoot on the ground. Yep, I remember that.

Back in the '80s, children mostly wore shoes only if they had to go to the store with their parents, and even then the shoes weren't cushioned. (I know because about a year ago my mother showed me all of our baby/toddler stuff that she's kept, including shoes.) Around the house, running around the back yard, at the beach, at the park, wherever -- it was acceptable to not wear shoes or even socks. It must have been uncommon, but I still vaguely remember occasionally walking around stores barefoot.

And we always took our shoes off on a car ride of more than 30 minutes or so. Somehow it's become rude and crude to have your bare feet touch any part of the car, even hanging out the window, as though you were propping your feet on some imaginary coffee table.

I'd mostly been following the minimal footwear thing ever since, and it sounds like it's time to go back. No more moon bouncy shoes.

March 17, 2013

Recent anxiety about credibility in everyday language

An earlier post detailed the recent rise of all sorts of slang words which presume that nobody else is inclined to trust you, so you have to insert these overt declarations that you're not lying to them. Most revealingly, "I'm not gonna lie." It's like everybody thinks that their peers are going to judge them as the boy who cried "wolf."

This preoccupation with establishing your trustworthiness includes a range of other widespread qualifiers. They all have the common theme of speaking as though you were a witness testifying before a skeptical jury. At the outset, you labor to establish your credibility, as though you're an expert or in a special position of knowledge and experience. When you lack convincing expertise yourself, you appeal to someone else's authority. And when all else fails, you phrase bare opinions as matters of fact.

It's tough to investigate how this behavior has changed over long periods of time because the particular phrases that people use for these purposes may differ across time. So I've kept only phrases that are generic, formulaic, and enduring, rather than specific buzz-words that may fluctuate more according to fashion. Even formulas come and go, but on longer time scales. I'll be looking here just at changes within the past couple decades.

Also, everyday speech generally does not get written down, so I'll look at what's stored in the media, and journalistic sources rather than books in general. They tend to have more opinion pieces where the writer is speaking personally, or quotes from people offering their opinion, than books do. The New York Times will serve as a snapshot of the broader society, but I've also consulted the Harvard Crimson to get a closer look at youth culture, which tends to be a greater source of innovation (for better or worse).

Finally, isolated cases do not establish a broad or lasting trend. A trend needs to sustain itself. Occasionally I've mentioned where there are isolated cases, to show that the phrase existed before the trend began, but was not yet a mass phenomenon.

First are prefacing statements of the rough form, "As a ___, I...." The "as a" clause seeks to establish credibility, and the second clause is the opinion, piece of advice, etc. The search engines of the NYT and Crimson won't let me do that, and I'm not going to write a program that does. So I've looked at three examples that fill in the blank, and only where they're used to set up another clause -- the one we might otherwise ignore or dismiss. Not where they're used as descriptions ("he will be remembered as someone who cared deeply for his friends").
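
Just to make concrete what kind of fill-in-the-blank search those engines won't do, here's roughly what such a throwaway script would look like, assuming you had the articles saved locally as plain text. The articles/YEAR folder layout is made up for illustration, and the fill-ins are the three studied below.

# Sketch only -- count "As a ___, I ..." prefaces by year in a local text archive.
# The articles/YEAR/*.txt layout is hypothetical; the fill-ins are the three phrases below.
import re, glob, os
from collections import Counter

# Credibility clause, followed by a first-person pronoun later in the same sentence.
# Crude filter: it will miss rephrasings and still catch a few descriptive uses.
pattern = re.compile(r"\bAs a (long[- ]time|person who|believer in)\b[^.?!]*\bI\b")

counts = Counter()
for path in glob.glob("articles/*/*.txt"):
    year = os.path.basename(os.path.dirname(path))
    with open(path, encoding="utf-8") as f:
        counts[year] += len(pattern.findall(f.read()))

for year in sorted(counts):
    print(year, counts[year])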

As a long time... This is an obvious appeal to greater experience than the average person has. It first appears in the Crimson in 1997. The NYT shows isolated cases in 1987 and '89, but the trend doesn't begin until 2000.

"As a long-time advocate of pre-registration for classes, at the beginning of each semester I try to rebel against shopping period as much as possible." (Crimson)

"As a long-time student of human development, the whole issue of what makes for a good education..." (NYT)

As a person who... Again, the speaker has special experience or qualifications. The Crimson shows an isolated case in '79 but does not get going until 1993 (further cases show up in 2000, '08, '11, and '11 again). In the NYT, it begins in 1992.

"As a person who had a sister who was a police officer and who was murdered, I would certainly not endorse or condone violence of any kind ..." (Crimson)

"As a person who has lost family members to lung cancer, I'd want to know before donating whether a charity takes tobacco money." (NYT)

As a believer in... This shades into the appeal to authority, namely the soundness and trustworthiness of whatever it is that you're a believer in. The Crimson has an isolated case in '71 but doesn't get going until 1992. For the NYT, it is 1987, the only example from the '80s (and then, the late '80s).

"As a believer in original sin, I figure that goal is never met, but you can move toward it, I suppose..." (Crimson)

"As a believer in the value of advertising, I marvel that such indifference exists in this region." (NYT)

X famously said... Unfortunately I couldn't use phrases like "X himself said..." to study the appeal to authority. People generally don't phrase it so bluntly; they take more time and elaboration when citing authority. Rather, "X himself said..." is used more to make something sound unexpected or contradictory (like "Snooki herself said there's such a thing as too much tanning"). When people want to introduce a belief, opinion, etc., that we're supposed to buy for authority reasons, the most succinct way they phrase it is "X famously said..." It first appears in the Crimson in 1999, and in the NYT in 1995.

'Herbert selects the very topics that demand linguistic self-consciousness, save that topic of genocide and terror which Adorno famously said would make "all lyric after the holocaust...barbaric." ' (Crimson)

"Oscar Wilde famously said that all bad poetry is sincere, and the same could be said of bad painting." (NYT)

It is the case that... Now we get into phrases that simply deny there's anything up for debate. Nobody uses it for factual statements, like "It is the case that the sky is blue on a clear afternoon." This sleight-of-hand shows the same preoccupation with credibility, only now any possible doubts are deflected as Objectively False. And it differs from merely stating an opinion without a qualifier -- where any non-autistic person can tell it's an opinion -- by instead prefacing it with a "this is the Objective Truth" clause. This phrase first appears in the Crimson in 1994, and in the NYT in 1998.

"Clearly it is the case that a vital goal of many course[s] is the building of a body of knowledge making analyses possible..." (Crimson)
[Bonus points for throwing in "clearly" and "vital" as well. What a dickless retard.]

' ''It has been the general view -- and I think it is the case -- that Delta has been weak in the marketing area,'' Mr. Mullin acknowledged.' (NYT)

It is clearly... As above. The Crimson has an isolated case in '73, and gets going in either 1991 or '94. In the earlier example, the "clearly" claim isn't so much a subjective judgement -- whether a building is well-crafted is a fairly objective matter. He's using that bit of praise to set up a more subjective assessment -- that the building's context is disregarded. The later example is clear-cut, though. The NYT has too many examples to inspect for whether they're used in the intended way, so I can't chart the change over time there.

"It is clearly a very well-crafted building," Eck says, "but the context is almost totally disregarded." (Crimson, 1991)

'When, on their last LP Check Your Head, Mike D raps "I live my life just how I like to," it is clearly more than commonplace rap bravado.' (Crimson, 1994)

It is obviously... It first appears in the Crimson in 1997. As above, the NYT has too many examples to inspect.

"There has been only one African-American general manager in the history of the game [of baseball]. It is obviously not a question of quality since this general manager, Bob Watson, helped build the World Series winners in New York." (Crimson)

I'm sure there are other phrases to look at, but this gives a pretty good view of the pattern. The trend is confined to the past 20 years. It was in some kind of trial-and-error stage in the late '80s / early '90s, and took off after that. How this all relates to broader social trends was already discussed in the post on "trustworthiness" slang.

March 16, 2013

Do the "first three years" matter for social development?

Psychologists, parents, the government, private donors -- just about everyone believes that a child's earliest years are the most important for their development.

Rather than address that on theoretical grounds, let's cut right to the reality checks. I'm sticking with social-emotional development since that takes a back seat to cognitive development in the broader culture. Parents are if anything eager to teach their kids how to read, do arithmetic, and so on. But they do everything they can to shelter them from the extensive peer contact that drives social development away from childish egocentrism and toward more mature give-and-take relationships.

There is far more variation across the historical record than can be reproduced in the typical lab experiment, unless it's got funding out the wazoo. And because so much of history is cyclical, each phase of the cycle is a shot at replicating the original result. I'm going with very recent history because that's all still fresh in people's minds = less work.

I can get along socially with people from my parents' cohort (1954 and '55), all the way up through my own (1980), my brothers' ('82), and even into those who were in high school or college with me when I was a senior ('83 and '84). Reaching back into the earlier Boomers, things get chancier, and I sense a sharper divide between them and the youngest Silents (mid-'40s births). And this has been true all my life, whether folks in my parents' cohort were 30, 40, or 50, whether the Silents were 40, 50, or 60, and whether my peers were 10, 20, or 30. There are very strong cohort or generational affinities that last over time.

What is shared across all those cohorts who I can more or less relate to is growing up in a rising-crime period, the main determinant of the zeitgeist. Now here's where we can test ideas about how crucial the first 3, 4, or 5 years are to a kid's social development.

If children imprint so heavily on their environment up through age 4 or 5, people my parents' age should show a pronounced stamp of the 1950s culture, but they do not. It's not that they show a little influence of the '50s and more of the '60s, which they spent more time in -- they just don't show that '50s impression at all.

Earlier Boomers (late '40s / early '50s births) kind of do, so kids must start imprinting on their social environment for real around the time they head off to elementary school. We can't tell whether that's some universal time schedule programmed into human nature, or an artifact of our society sending them into heavy peer contact at that age. But in our society anyway, that's when kids start to really absorb cues about what social world they're going to have to fit themselves into.

At the other end, there are the Millennials born in the second half of the '80s who come from outer space, social-wise. The first cohort that feels totally Millennial is the '87 births, although '85 and '86 are still out there. They spent their first four years in the good old days, granted the twilight of it, but still. If they imprinted early, they'd look like those who were children in the earlier portion of the '80s.

Yet they're more awkward, closed-off, glib, dismissive, and repressive in their social-emotional ways. Way more than people born in '82, who I know a lot of from my brothers and their friends, as well as younger people I knew / hung out with in high school, college, and today. The '82 cohort is 5 years apart from the '87 Millennial cohort, so go forward another 5 years to see if it's merely an age gap problem.

Nope, people born in '87 and '92 are remarkably similar to each other -- probably some subtle differences, but clearly Millennial (someday we'll be saying "early Millennials"). The '92 Millennials grew up entirely within falling-crime times, whereas the '87 ones got to live through toddlerhood during rising-crime times, yet their outcomes are not worth splitting hairs over. Likewise, in the previous great generational divide, there's a much greater difference between '43 and '48 births (Silent vs. Boomer) than between '48 and '53 births (both early Boomers).

If social development doesn't seem to really get going until age 5 or 6, when does it end? I'm tempted to say it's like language, an obvious rather than a lazy analogy in this case. So somewhere in the early half of adolescence, probably no later than 16 or 17.

If that's right, then we should see a perceptible divide, though not a yawning chasm, between those who were born from the mid-'50s through the first half of the '70s, vs. those born in the second half of the '70s and first half of the '80s. The first group spent their entire formative years, 5 to 17, in rising-crime times. They obviously have other differences, but not much in terms of fundamental social skills and behaviors.

Once you get to those born in the second half of the '70s, they're mostly inoculated against the contagion of '92 and after, but not entirely. They spent some sensitive middle-adolescent years in that period. Most of their growing-up period told them to develop in one way, and suddenly they got the message to develop the opposite way.

And that semi-awkwardness should show a gradual increase up through the '84 births, who made it through the "middle years" of childhood in the good old days, but spent even more of their crucial years in the atomized era. That sounds about right. The later '70s births do show a stronger imprint of '90s sarcasm, irony, etc., than the early '70s births do. Much stronger. Not that they pioneered it, but that they've internalized it more. Ditto the early '80s births.

With Millennials, it's not semi- or quasi-awkwardness, it's full-blown. Unlike the gradual cline that we see at the upper end of the window, the lower end shows a sharper divide because children are apparently not very tuned in socially before age 5. They're still interacting only with close kin, perhaps still being breast-fed. So whether you got in 1, 2, 3, or 4 years before things started going downhill, you all end up the same because all were equally shielded from the larger social stirrings.

Most of this seems on the right track, though certainly I could be off by a couple years at the lower or upper end. The main point is to show how much you can figure out just from a review of recent history, though of course it's better to go back to another generational cycle and try repeating the results. I do think you see the same big split between early '20s and late '20s births that you see with early '80s vs. late '80s, only then it was Greatest Gen vs. Silent Gen. But that would be for another post.

March 14, 2013

Early '90s thrillers as a sign of the cocooning shift

Just saw Sleeping with the Enemy, a somewhat watchable example of the paranoia thriller genre that was so pervasive in the early 1990s.

In the paranoia thrillers of the mid-'70s, the danger came from conspiracy and corruption at the elite level. The fact that such a genre was so popular then speaks to how little trust there was in The Establishment -- whether it was elected politicians, police departments, corporations, or the local business power players. The '80s thriller would add a source of danger that was supernatural, not-quite-human, or otherwise outside of the circle of normal people you know and trust.

In either case, the fact that the danger comes from some outside group, even one physically near your in-group, raises the audience's awareness that they need to be more trusting of others in their group who reach out to rely on them, and not to feel awkward about reaching out themselves. Whether this mutual support ultimately gets them out of harm's way or not, they have no chance if they go it alone.

In contrast, the early '90s thrillers are designed to make you fear your own neighbor, spouse, and best friend. And the fact that they saw such great success at the box office shows that average Americans had already started to withdraw their trust even from those closest to them.* Their protagonists may get a little help from true friends, but mostly they survive by going it alone, having been burned so badly by betrayal.

During this early stage of social atomization, the movies don't make a virtue of total self-reliance. People in the audience have just started to perceive everyone else as untrustworthy, and are seeking validation of their desire to cut themselves off. It's easier to rationalize that decision if you see yourself in the place of characters who are isolating themselves more out of urgent necessity than as some kind of perfectionistic test of self-sufficiency.

Later into the '90s, the plummeting trust levels are felt to be here to stay, not news anymore, so they're just accepted as part of the background of thriller / noir type movies.

The early '90s was therefore a divorce period, where people felt that a social bond that had been going on for quite some time needed to be severed. You exaggerate how awful the other side is in order to make it easier and faster for you to cut the ties. Once you're split apart, though, you can calm down from the heated, accusatory phase, and move on in your own separate ways with the understanding that you're never getting back together again. Lack of fellow feeling is now taken for granted, and if you do float the idea of re-connecting, you'll be reminded that the togetherness is over.

That's more or less where we are now, and sometime in the next 5-10 years I think we'll see people turning around and saying, Maybe we were too uncharitable toward each other -- it really did use to be more fun and rewarding when we were together, wasn't it? And the cocooning period will change course as it did before in the late 1950s, after the Age of Anxiety and drive-in restaurant atomization.

I don't feel like reviewing the movies themselves, let alone from both the '70s and the '90s. It seems like the '70s paranoia thriller is a well-understood genre already. But it is worth drawing attention to the phenomenon of early '90s paranoia, since you rarely hear about it -- not wanting to re-live the bitter divorce phase, I guess. I suspect there's a similar string of movies from the early-mid-'30s during the divorce from the Jazz Age, and before the calmed-down "separate ways" phase of the '40s and '50s.

At any rate, here's a quick list with a few comments. Again, these hit thrillers have to center on betrayal by someone socially close, and on the protagonist's tendency to go it alone. Hopefully the list is close to exhaustive: I went over the top 100 at the box office for each year, judging either from memory or from a plot summary when the title sounded suggestive.

Presumed Innocent (1990). Not a strong example because we don't even learn about the betrayal until the very end, though the audience still takes the message away with them afterward.

Sleeping with the Enemy (1991). A melodramatic blow-up of the hysteria about date rape / wife-battering.

Deceived (1991). Haven't seen it or even heard of it before, but it was #47 at the box office that year, and the plot summary fits the mold.

A Kiss Before Dying (1991). Haven't seen it either, but it was #75 and sounds like it fits too.

The Hand That Rocks the Cradle (1992). Never saw it. Shows the early days of helicopter parenting, exaggerating the threat posed by any kind of "allo-mothers."

Unlawful Entry (1992). Not an anti-establishment movie, even though the villain is a cop. His job doesn't tie into a theme of corruption or conspiracy -- it's just what he happens to do.

Single White Female (1992). 'Nuff said, as they used to say.

Consenting Adults (1992). Haven't seen it.

The Fugitive (1993). Of all the movies in the genre, this one definitely made the "trust no one" point the strongest. Not because so many different people were in on the frame-up, but because it had the greatest emotional force. As with Presumed Innocent (also starring Harrison Ford), we don't even learn who the close betrayer is until very late.

The Firm (1993). Not an anti-establishment movie, even though the danger comes from a group of law firm employers. They lured the protagonist away from an East Coast Establishment job by gaining his trust, unlike a faceless corporation or bureaucracy.

Malice (1993). Haven't seen it.

The Good Son (1993). Kind of laying it on thick by having the kid from Home Alone play a psychopathic family member.

So I Married an Axe Murderer (1993). A more comedic, ironic take on the genre.

The Crush (1993). If you were a red-blooded male born in the late '70s or early '80s, 1993 was all about Alicia Silverstone. Now there's an axe murderer we wouldn't mind marrying.

Disclosure (1994). Can't remember this one too well, but might fit. Looks like the last desperate attempt at the genre.

After the single entry from '94, I find nothing from '95 or '96, meaning I'm not going to bother with years after that. This review really makes you appreciate how brief and therefore how intense the divorce period was. In movies, it gets going in '91, picks up during '92, takes it to the max in '93, and is basically over with by '94. The mid-'90s was when apathy among young people and complacency among middle-aged people began to set in -- no more fever pitch.

Which of them are worth seeing? My friends and I watched The Crush a lot on video / cable TV, but that was more from our brains drowning in hormones. I remember enjoying The Fugitive each of the 3460 times I saw it that summer, but haven't seen it in a while. And as mentioned, it doesn't dwell on the theme of betrayal by someone close like the other movies do, saving that for the end. Not as much of a downer.

* The General Social Survey shows that trust levels have steadily declined since a local peak in the late '80s.

March 12, 2013

The Parallax View and the Sublime visual style

Finally tracked down a copy of this '70s paranoia thriller at the local used record store. It was available on Netflix Streaming a couple years ago, and was the very reason I signed up for the service, as the DVD was/is out of print. And goddamn if its expiration date hadn't passed the very next day!

Movie-watching was so much more satisfying when we had the neighborhood video rental store that stocked a library/archive of classics as well as cult classics. On Netflix Streaming, the classics are absent ("too expensive to license"), and the cult classics fade in for a couple months and then expire for good. And forget about renting anything from Redbox that's older than a few years. But moving on...

Striking art relies on strong contrasts or juxtapositions, firing the opposite ends of some spectrum in our brain at the same time. Hot and cool colors, for example. Done right, the combination produces a heady effect in the viewer. When the creators are no longer capable of such striking visuals, or the audience becomes uninterested in them, the balance is thrown in either of two directions -- toward washed-out, slurry homogeneity (The Matrix) or toward collage-like information overload (Requiem for a Dream).

So, even if you find yourself unable to resonate with the plotlines, dialogue, or characterization of older movies (and they did those things better too), you should at least be able to enjoy their visual awesomeness. And The Parallax View is no exception, thanks largely to the cinematography of the legendary Gordon Willis (The Godfather series, All the President's Men, Manhattan). Movies have to be seen to be believed, but for this one still images give a very good idea since the camera doesn't move around very much and there is generally not a lot of fast-paced action.

What follows is a review of the major contrasts employed throughout the movie. Perhaps later in the week I'll follow up with a short discussion of what larger artistic traditions it is a part of, and why certain styles achieve greater success in some periods than in others.

First, and most memorably, there is the contrast in size between tiny human-scale figures and gigantic architecture. Whether the characters are scuffling on top of the Seattle Space Needle or strolling below a corporate skyscraper, taking a quiet ride up the escalator or standing trapped below a dam whose gates have just opened, their helplessness is made palpable.





However, the aesthetic of powerlessness is not used for voyeuristic, masochistic effect -- the way contemporary audiences seem to get off on futility porn -- but to serve as a rude awakening. Sometimes all that happens is the guy reaches the top of the escalator; other times the guy plummets off the roof to his death. It humbles the audience, awakening a feeling of dread about the uncertainty and topsy-turviness that human beings experience in such over-powering environments.

Next is the contrast between dark and bright lighting. Natural lighting conditions tend to produce more uniform levels -- brighter when the sun's out, darker when it's gone. Seeing both vivid brightness and saturated darkness at the same time provides a powerful yet unconscious appreciation for how unusual, even unsettling, the circumstances are. And because the brightness comes from human ingenuity (fires, lamps, lightbulbs), it also subtly massages the parts of our brain that respond to the natural vs. the artificial at the same time.





Bright vs. dark for lighting leads into sharp vs. blurry for focus. As I discussed here, this is so much more striking when the movie is shot with an anamorphic lens. The shallow focus makes the object we're supposed to be looking at very crisp, and everything else in front of or behind it more blurry. That stylizes the mise-en-scene so that we don't have to explore every little detail and ask "Is this important?" as we would in real life. Giving too much freedom to the audience takes us out of the experience, which should be more like going on a ride and getting lost in the moment.

Shallow focus also allows the movie to tease us with only hazy hints of what's going on in the background as we attend to the main action in the foreground. We may be able to see the rough forms of human faces, but what exact expressions are they making -- what are their plans for those in the foreground? Or maybe the foreground face looks clearly upbeat, while the background faces -- we can't quite tell -- look a little nervous. Why?

This is one of the most effective techniques for building tension that will be resolved once the figures move to occupy space at the same distance from the camera, allowing us at last to see clearly how they'll interact. Details that are crisp at too many distances = giving away all information up front, no suspense.




Finally, and related to the focus contrast, is the contrast between open and occluded space. Like the focus contrast, it prevents a deep 3-D perspective from forming. Instead of -- or in addition to -- making certain things more salient by rendering them sharp and the rest blurry, the artist can place those things in our clear line of sight and shove huge opaque objects in the way to block out the rest. But in order for it not to look ham-fisted, those objects can't look like stage props that were dumped right into the foreground; they have to jut in from the edges of the frame.

This technique produces a tension in the audience similar to the one shallow focus does. When the background is blurry, we want to squint a little and make out some important details better -- not all of them, but at least something like facial expressions. We feel slightly frustrated and anxious. When those details are instead occluded by large planes, we want to crane our neck or walk over to take a different viewpoint, one that'll allow more to come into view. It's like, "if only that damn camera would move, we'd get a better angle!" Though again, giving the audience too much freedom would rob them of the thrill of feeling suspense.

Along with the tiny vs. gigantic size contrast, the open vs. occluded space contrast gives the film its signature look. It appears constantly throughout the movie, so any single image may not do justice to the feeling you pick up over the course of watching it. It's used to greatest effect in exterior scenes, where occlusions that take up two-thirds of the frame create a sense of claustrophobia in what should be the wide-open outdoors. So fucking awesome-looking.

Below I've included a three-frame sequence of a plane heading for the runway just after a bomb has been planted inside it. We want to know what's going on in there, yet that damn building is standing right in our way! This occluded shot goes on for what seems like forever, and by the end the suspense is killing us.








Sometimes the effect is not so strong, as when it's the walls of a building that block our sight, with only a small window for transparency (see below). I think this fails to produce tension in the viewer because we know that no matter where we placed the camera, and no matter what angle we looked from, we could never get around the mighty enclosing walls of that building. So we don't sweat it, and accept the limited view with little anxiety. Only if a more open view seems possible, yet we're frustrated by the placement and angle of the camera, do we start to feel nervous.


The only major contrast that they didn't exploit is hot vs. cool colors, for example contrasting reds and oranges with blues and greens. Come to think of it, most of the otherwise sublime-looking movies of the '70s don't go there either, such as Chinatown or Three Days of the Condor. Color contrast had to wait for the decade of neon blue and neon pink. Throughout the '80s, the earlier contrasts were still being used, but now also the full wild rainbow of color. Blade Runner, Manhunter, and Black Rain hypnotized audiences with color contrast not seen since Art Deco architecture, Empire/Regency design, or the painting of El Greco and Rubens.

In fairness, The Parallax View is meant to be more naturalistic than supernaturalistic in its atmosphere of menace and paranoia, and a kaleidoscope of colors would have detracted from that. Still, although that choice is best suited to the larger goals of the film, it does leave the viewer wanting something on a purely visual level. Again, it doesn't matter what you think about the plotlines, dialogue, and characterization of the '80s movies mentioned above -- their other-worldly narratives open up an even greater range of sublime images.