Interstellar: When a Good Movie Goes Bad

(Image Credit: Paramount Pictures, Warner Bros.)

Christopher Nolan’s Interstellar arrived in theaters in November 2014, a science fiction epic by the critically acclaimed director of Memento and the Dark Knight trilogy. The film stars Matthew McConaughey, Anne Hathaway, Jessica Chastain, and Michael Caine and focuses on a journey through time and space to save humanity from extinction, as a worldwide outbreak of blight destroys crops and depletes the planet’s oxygen. To save the world, McConaughey leads a NASA mission through a wormhole to another galaxy, searching for a planet suitable to become humanity’s next home.

Interstellar received mostly positive reviews upon its release, with A.O. Scott of the New York Times declaring the film full of visual dazzle and thematic ambition, “a sweeping, futuristic drama driven by grief, dread, and regret.” Other critics deemed the film “a unique and mesmerizing experience” and “a cosmic adventure story with a touch of the surreal and dreamlike.” It eventually won the Academy Award for Best Visual Effects and grossed $677 million worldwide, making it both a critical and commercial success, cementing Nolan as one of the great filmmakers of the twenty-first century, and further endearing him to his rabid fan base.

There are a lot of things I like about Interstellar. It is a breathtaking film to watch, with impressive imagination going into the design of its “interstellar” worlds. Nolan experimented further with 70mm, shooting several scenes in Interstellar with IMAX cameras before eventually deciding to use the format for an entire film. While some criticize Hans Zimmer’s music as overly bombastic, I believe it works in this film, helping to convey a sense of the grandiose beyond our imagination. Like Dunkirk, Interstellar is a great film when it comes to technical composition, with cinematography, music, production design, and visual effects coalescing into a true visual experience in the same manner as Stanley Kubrick’s 2001: A Space Odyssey, still one of the seminal works of the science fiction genre.

Interstellar is impressive as a technical achievement, but movies are judged on their narratives and performances along with their production. It is in the storytelling that Interstellar falls short of being a great film. Far from being riddled with clichés or incomprehensible, Interstellar has an engaging storyline that completely falls apart at a very specific point in the film. Before revealing the precise moment Interstellar derails, it is worth addressing two frequently cited complaints that do not actually work against the narrative.

Time Dilation

One of the critiques levied at Christopher Nolan, whether in jest or not, is that many of his films “mess with time,” featuring nonlinear structure or cross-cutting between different stories to let the audience experience the same chaos the characters are feeling or to catch the viewer off guard. Memento, Batman Begins, Inception, and Dunkirk all employ some element of nonlinear storytelling, a motif in Nolan’s filmmaking in the same manner as table banter in Tarantino films or general terribleness in Raja Gosnell films.

Interstellar features a rapid progression in time but grounds it in the physics concept of “time dilation.” When McConaughey and the team arrive on the first potential planet, they note that its gravity is significantly greater than Earth’s, warping time to such a degree that one hour spent on the planet equals seven years on Earth, making speed of the utmost importance. Things go awry, and the delays cause twenty-one years to pass on Earth. Even if the viewer does not have a background in physics, this does not derail the movie; it is simply a plot device to make the situation more dire. Moreover, it allows McConaughey’s daughter to age into adulthood and take a larger role in solving the gravity equation that has stumped Caine’s character for years. Though rooted in hard science, the time dilation simply advances the action and adds greater consequence to the characters’ actions.
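For the curious, the film’s stated figures are easy to sanity-check with back-of-the-envelope arithmetic. Below is a minimal sketch in Python; the one-hour-to-seven-years ratio and the twenty-one-year gap come from the film’s dialogue, while the variable names (and the choice to express the ratio as a single multiplier) are mine:

```python
# Back-of-the-envelope check of Interstellar's time-dilation figures.
# Stated in the film: 1 hour on the water planet = 7 years on Earth.

HOURS_PER_YEAR = 365.25 * 24  # ~8,766 Earth hours per Earth year

earth_years_per_planet_hour = 7
dilation_factor = earth_years_per_planet_hour * HOURS_PER_YEAR  # Earth hours per planet hour
print(f"Implied dilation factor: ~{dilation_factor:,.0f}x")  # ~61,362x

# If the delays cost the crew twenty-one Earth years, the time actually
# spent on the planet works out to about three hours.
earth_years_elapsed = 21
planet_hours = earth_years_elapsed / earth_years_per_planet_hour
print(f"Implied time spent on the planet: ~{planet_hours:.0f} hours")  # 3 hours
```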

Scientific Jargon

Interstellar relies on scientific realism, specifically the ideas of theoretical physicist Kip Thorne, to set its narrative in motion, opting for a science fiction movie grounded in reality as opposed to the futuristic technology and fantasy elements common throughout the genre. Concepts like gravitational waves, wormholes, and black hole cosmology owe to Thorne’s work, and Thorne himself laid out guidelines before consulting on the film:

First, that nothing would violate established physical laws. Second, that all the wild speculations, and there certainly are some here, would spring from science and not from the fertile mind of a screenwriter.

Thorne, Neil deGrasse Tyson, and Michio Kaku were very pleased with the film’s adherence to scientific realism, grounding itself in science rather than fantasy. But to adhere to this format, there are many instances throughout the film where the plot stops so the characters can explain the science to the audience, showing that Nolan and the film’s crew indeed “did their homework.” Again, this is a lot of hard science, but like “messing with time,” it does not detract from the movie. The use of science gives the film a personality much different from many other science fiction films; the fact that the people of Earth have few options due to our limited knowledge of physics makes the conflict more critical than if we could simply travel at the speed of light to another galaxy. Moreover, as with the time dilation, the audience doesn’t have to know the equations and theories behind the science, just that the characters are in a grim situation with limited options to save humanity. Scientific realism gives Interstellar a style and personality much different from films like the Star Trek, Star Wars, or Alien series, with abstract science highlighted by technical mastery to produce a visual masterpiece.

And then it all falls apart

Though well made from a technical standpoint, with a lot of thought and care devoted to the visuals, cinematography, and science, Interstellar completely falls apart in its third act, resulting in a film that is merely “good” when it had the potential to be great. After completing a “Heart of Darkness, but in space” arc in its second act, McConaughey and Hathaway are left stranded on a barely functional space station hurtling toward a black hole. Hathaway resolves to travel to the final potential planet carrying fertilized human eggs, “Plan B,” while McConaughey launches himself into the black hole Gargantua, allowing the station to move in the opposite direction per Newton’s third law of motion. Rather than being ripped apart atom by atom by the black hole’s gravitational pull, McConaughey enters a tesseract, a five-dimensional space that allows him to move through time as a physical dimension, reconnect with his daughter, and relay the information from inside the black hole needed to complete the gravitational equation and bring all of Earth’s people through the wormhole into the new galaxy.

The scene in the black hole fundamentally derails the movie, undermining the film’s established scientific realism and relying on “the power of love” to resolve its central conflict. McConaughey’s actions in the black hole represent a shift from established physics to theoretical and speculative physics, with humans from the future, able to perceive five dimensions, supposedly responsible for creating the tesseract. Furthermore, McConaughey can supposedly communicate with his daughter because of his love for her, as “love is able to transcend time and space.” Because we can love people across great distances, and even love people who have died, it is love that allows McConaughey to connect with his daughter from another galaxy. Though mentioned briefly, this is not a theme that carries through the entire film. In a movie so reliant on realism and scientific relationships, resolving the conflict through emotional connection is a jarring shift that deviates from the previously established themes and atmosphere. As movie critic Bob Chipman stated,

Interstellar actually still wants to be about humanity, human beings, families, emotional connections, and, well, feelings. Which means, its actual thematic underpinnings could not be less suited to Christopher Nolan.

Though the director behind many beloved movies, Nolan is not known for placing emotion at the core of his films, with style, imagination, and technical wizardry superseding heart. This is not necessarily a bad thing; after all, many of Nolan’s movies are still very good, and not every work needs to focus on an emotional core. However, suddenly relying on emotion to solve the crisis of humanity breaks from the rest of the film, likely a relic of the original draft written when Steven Spielberg was set to direct. The “power of love” as the savior of humanity is certainly noble in conception, but it contradicts the rest of Interstellar, resulting in a movie that ultimately doesn’t know what it wants to be. Certainly a technical achievement, an experience worth watching for its imagination and design, Interstellar is prevented by its narrative shortcomings from being the great film it had the potential to be.

How to (Preemptively) Fix Rush Hour 4

(Image Credit: New Line Cinema, Warner Bros.)

On the February 22 episode of “The Plug,” a podcast from The Undefeated (owned and operated by ESPN), comedian and actor Chris Tucker announced a sequel in the Rush Hour franchise, a series of buddy cop films dating back to the release of the original Rush Hour in 1998. Tucker was quoted as saying,

“It’s happening. This is gonna be the rush of all rushes. Jackie is ready and we want to do this so that people don’t ever forget it.”

Although Tucker made his announcement in February, because I do in fact live under a rock, I did not learn of it until very recently. Though I am certainly a fan of the franchise, with a certain nostalgia for Jackie Chan, Chris Tucker, and comedy from the 1990s, upon learning of a potential fourth film I was perturbed rather than excited. A sequel in the Rush Hour franchise released years after the last entry, featuring callbacks and in-jokes that only make sense if you watched the earlier films? We had that already; it was Rush Hour 3. Do you remember Rush Hour 3? It was absolutely terrible! A lazily written, cynical cash grab that damaged a fun (even if not necessarily perfect) film series.

Thus, in an effort to make sure a travesty like Rush Hour 3 doesn’t happen again, I have put together a basic outline of suggestions for how I would preemptively “fix” Rush Hour 4. Obviously, I’m not a professional screenwriter, but as a fan of a series that still has value and something to say in today’s sociocultural climate, I believe these suggestions would go a long way toward creating a satisfying film.

Go Meta

Quick, off the top of your head, what is the most memorable buddy cop movie of the past decade? That would be 21 Jump Street, right? Some of you in my theoretical audience may have suggested The Other Guys. What do those movies have in common? They both deconstruct the buddy cop genre, poking fun at the predictable scenarios and character clichés. The sequel to 21 Jump Street, 22 Jump Street, goes further and makes fun of the “bigger and better” edicts of Hollywood sequels as well as the sheer ridiculousness of franchise building.

A new Rush Hour sequel should follow this model, using meta humor rather than simply pretending that its likely formulaic plot is fresh and new. Rather than a simple “here we go again,” make fun of the fact that Agent Lee (Chan) and Agent Carter (Tucker) are still running around dealing with the same villains and threats as before. Moreover, make fun of the fact that these guys are back in a superfluous sequel, with a new curmudgeon police chief criticizing Lee and Carter as out of time and place, à la “it’s 2018, we don’t need you anymore.” Going in the meta direction would result in a thoughtful and complex film, with the two leads working to overcome the villain while the audience is made to care and root for both the actors and the movie to succeed in spite of the internal and external meta criticism of its own existence.

Allow the Leads to be Older

When the first Rush Hour premiered in 1998, Jackie Chan was a 44-year-old actor still performing his own stunts and imbuing martial arts action with a comedic timing that separated him from his peers. Chris Tucker was 26 years old and a rising comedic movie star, gaining fame for memorable roles in The Fifth Element and Friday along with Rush Hour. Chan has since attempted to move beyond strictly action roles to diversify his filmography, relying less on stunt work that he simply cannot perform anymore. Tucker, by contrast, largely disappeared from stand-up comedy and Hollywood after becoming a born-again Christian, his reprised role in the Rush Hour sequels being his only major work in the 2000s. Simply put, both men are twenty years older and significantly different from the people they were in the 1990s.

Rush Hour 3 ignored the basic reality that its stars were older actors, hoping that audiences would forgive the six-year gap between releases and be content with a repackaged plot and “remember this” in-jokes. Remember, Rush Hour 3 was the lazy cash grab we’re trying to avoid in this thought exercise. Borrowing from the meta humor, a fourth Rush Hour film should draw attention to the fact that Lee and Carter are older. This does not mean making fun of Lee and Carter for being old, which is what some films think constitutes self-awareness. Instead, how have Lee and Carter’s worldviews changed since the 1990s? How do these men respond to getting older, to their limitations, to a social and political climate that has shifted since the 1990s? Hell, give us a dramatic conversation between these two, supposedly good friends off-screen, where they can, gasp, show off some acting chops. A sequel with self-referential humor and some emotional weight, not taking itself too seriously but also having a truly serious moment or two, would be a memorable and thoughtful film, a hard feat to achieve in a franchise spanning decades.

End It

This is likely the hardest step to follow, for a Hollywood studio will always try to extend or revive a franchise as long as it is making money or has the potential to make money. But additional Rush Hour sequels would only extend and dilute the brand more than it has been already. Lethal Weapon, Police Academy, Beverly Hills Cop (they are making a fourth one), and, as of now, Rush Hour all overstayed their welcome precisely because they continued to rely on formulaic plots and stale jokes; the studios simply wanted to make money rather than make good movies. Ideally, the Rush Hour franchise would have ended after the second film, but since that’s not the world we live in and a fourth film seems to be a reality, it should be the final entry, a fitting conclusion to the franchise rather than recycled references and jokes stuck in the 1990s. Following the outline laid out above would result in a thoughtful, introspective work, one that recognizes the superfluous nature of its own existence while also having something to say about its characters, the world of today, and getting older. Just because it is the fourth film in a franchise does not mean Rush Hour 4 has to be terrible. A film made with care and consideration would elevate the franchise and the genre itself, offering insight in a genre that often lacks it.

Hey Arnold, Class, and Inequality: A Tale of Two Rhondas

(Images Credit: Nickelodeon)

Hey Arnold! was one of the original cartoons (“Nicktoons”) airing on Nickelodeon in the late ’90s and early ’00s, running from 1996 to 2004. The series centered on the life and experiences of Arnold Shortman*, a nine-year-old boy with a football-shaped head living with his grandparents in a boarding house in Hillwood, an urban amalgamation of New York City, Seattle, and Portland. Many episodes focused on Arnold navigating life in the city, dealing with the rigors of school, urban legends, or childhood adventures with his friends and local denizens. While early on Hey Arnold! concentrated on Arnold as the central character, over time the series turned its attention to secondary characters, with Arnold nearly becoming a superhero who swoops in to help them deal with their problems.

Over the past few weeks, I’ve been revisiting the series, which airs on TeenNick’s “NickSplat” block from 11pm to midnight in an effort to capitalize on nostalgia for childhood cartoons. Compared to other Nickelodeon cartoons, Hey Arnold! holds up fairly well, with surprisingly adult storylines and a setting and characters that are not too dated to enjoy today. Two episodes in particular stand out, possessing eerily similar plots and themes but reaching vastly different conclusions. These two episodes, “Rhonda’s Glasses” and “Rhonda Goes Broke,” deal with class and inequality, with one episode handling these issues well and the other botching the message so badly that it stuck with me, motivating me to take the time to critically analyze a children’s show from almost twenty years ago (and now I feel old).

In the show, Rhonda is the rich, popular girl in Arnold’s class, wearing only the latest fashion and flaunting her wealth and status as a demonstration of superiority, much to the dismay of her classmates. Rhonda-centric episodes of Hey Arnold! often focus on the drawbacks of Rhonda’s wealth and behavior, with “Cool Party,” in which Rhonda invites only “cool kids,” and “Polishing Rhonda,” in which Rhonda goes to finishing school, showing the limitations of wealth and the detriments of snobbish, condescending behavior. Episodes centered on Rhonda largely offer the message “don’t act like Rhonda,” with Rhonda’s “cool party” ending up anything but cool and Rhonda only succeeding at finishing school by completely changing her attitude and becoming humble.

Likewise, “Rhonda’s Glasses” and “Rhonda Goes Broke” fixate on Rhonda’s behavior, aiming to teach kids once again that conceited behavior and treating people as lesser simply because of differences in wealth or status are mean-spirited and wrong. The former episode begins with Rhonda sending a new kid, coded as a “nerd” due to his glasses, quiet demeanor, and lack of confidence, to the “geek seats” in the back of the bus, where the “geeks” experience motion sickness and inferior treatment simply because of their status.

Image Credit: Nickelodeon, http://heyarnoldreviewed.blogspot.com/2016/05/s2-e33-eating-contest-rhondas-glasses.html

Eventually, Rhonda takes and fails a vision exam, requiring glasses in order to see. However, wearing glasses marks Rhonda as “a geek,” and she too is banished to the back of the bus with the rest of the “geeks,” experiencing the same humiliation and substandard treatment she inflicted on them.

Image Credit: Nickelodeon, http://heyarnoldreviewed.blogspot.com/2016/05/s2-e33-eating-contest-rhondas-glasses.html

Rhonda suffers further humiliation and subpar treatment, sitting with the other geeks at a broken lunch table and playing with a flat ball at recess, until she decides she’s mad as hell and isn’t going to take it anymore. She urges her fellow geeks to rebel, inspiring them to proclaim her “queen of the geeks” and resist the unwritten rule of “geek seats” on the bus. The episode ends with Rhonda inviting the previously shunned new kid to sit with her at the front of the bus, having learned her lesson and become a better person.

“Rhonda’s Glasses” features a “want vs. need” character arc for Rhonda as well as lessons for children regarding class and inequality. At the beginning of the episode Rhonda acts like a jerk, believing herself superior within an artificial class system based on popularity, itself largely informed by wealth. Upon receiving glasses, she initially “wants” her life to go back to normal, to be rid of her new spectacles and retain her place at the top of the social hierarchy. But after experiencing life as a geek, she recognizes the unfairness of the entire system and rebels; she “needs” to treat people better and do away with an unfair system, even one she benefited from. The broader lesson of this episode, particularly important for a children’s program, is to treat people fairly; one should not act condescending based on status, in this case social class within the school system. Rhonda and the intended audience of children both learn that differences are often artificial, and that discrimination based on perceived differences is immoral, violating the golden rule of “treating others as you would want to be treated.”

“Rhonda’s Glasses” aired on December 7th, 1997, and offered a positive message for children as well as an accessible understanding of class difference and discrimination. A little over three years later (January 5th, 2001), Nickelodeon aired the episode “Rhonda Goes Broke.” On the surface, this episode appears to follow a similar formula and offer the lesson to not act like the conceited version of Rhonda. The episode begins with Rhonda flaunting her clothing, wealth, and impeccable sense of fashion, once again acting snobbish and superior to her classmates based on her status. Upon returning home, she learns that her parents lost all of their money and they are now poor, forced to move into the boarding house where Arnold lives. Initially, Rhonda pretends that everything is normal, claiming that she will soon receive new clothes and go on expensive vacations in Aspen, but it is finally revealed by another rich classmate that Rhonda’s family is poor, so poor that they cannot afford food or clothing.

Image Credit: Nickelodeon

Rhonda cries and laments, wanting to go back home and “be rich again,” explaining to Arnold that being rich is the one thing she’s “really good at.” Arnold reprimands her, calling her pathetic and arguing that just because she is no longer rich doesn’t mean she is no longer Rhonda, “unless being rich is all she’s about.” Rhonda takes this advice to heart and makes her own clothes, keeping up her fashionista persona despite the fact that she can’t afford expensive ones. Her classmates are impressed with the new Rhonda, and at this point the “want vs. need” arc is complete, Rhonda learning once again that wealth and status are not what truly make up her identity and that putting people down because of their lack of wealth or status is not appropriate behavior.

But the episode doesn’t end there. No, the episode ends with her family’s stock “bouncing right back” and the Lloyds being rich again (even the musical cue, at minute 22:22, finds this ridiculous). Rhonda completely reverts to the old Rhonda, flaunting her wealth and offering Arnold a tip for his help and advice, as “people in his position appreciate these things.” Rhonda is rich and back to her previous behavior, completely negating the story arc and her character development.

While “Rhonda’s Glasses” successfully handles the lesson of class and inequality, showing the audience that status distinctions are often artificial and should not be used to discriminate, “Rhonda Goes Broke” muddles its message, contradicting its own story arc for a mean-spirited joke. The lesson of the former episode, “discrimination is wrong,” is a positive one for children, whereas the ending of the latter seems to argue, “be yourself, unless you can be rich. Then be rich.” Though both episodes follow a similar trajectory and focus on the same themes, one effectively handles its narrative and lesson while the other completely mishandles it. When the goal is to entertain and educate children, getting the message right matters all the more, as negative lessons can be detrimental in both the short and long term.

*Though a running joke throughout the series was that Arnold did not have a last name, the recent television movie Hey Arnold!: The Jungle Movie revealed that his last name was indeed “Shortman,” a nickname frequently used by Arnold’s grandfather.

Revisiting Troy (2004): An Anti-War Epic?

(Image and Video Credit: Warner Bros.)

Wolfgang Petersen’s film Troy, a (very) loose adaptation of Homer’s Iliad, hit theaters in 2004, part of a wave of sword-and-sandal epics throughout the early 2000s that looked to achieve the same critical and commercial success first enjoyed by Gladiator (2000). The film stars Brad Pitt as Achilles, Eric Bana as Hector (the role in between the abomination that was Ang Lee’s Hulk and Steven Spielberg’s critically acclaimed Munich), Orlando Bloom as Paris, Brad Pitt’s abs, Brad Pitt’s terrible accent, Sean Bean as Odysseus, Brian Cox as Agamemnon, Diane Kruger as Helen, Brad Pitt’s terrible accent again, Peter O’Toole as Priam, and Brad Pitt’s terrible accent a third time. Rather than a direct translation of the Greek epic poem, Petersen’s film ignores the mythological aspects of the conflict and condenses the war from ten years to a few days of fighting. Instead of a retelling of an ancient story, the film alters the narrative of the Trojan War in an attempt to make history modern.

In this vein, one particular scene stands out as a reflection of contemporary events rather than a sole focus on a conflict of myth and legend. After Achilles and his Myrmidons establish a foothold on the beach of Troy, leading Agamemnon and the Greek kings to believe the war will be brief and victory swift, the combined Greek army fails to breach the city walls and suffers heavy casualties. That night, Agamemnon laments to Odysseus and his adviser Nestor, “They’re laughing at me in Troy, drunk with victory! They think I’ll sail home at first light.” Nestor replies, “If we leave now, we lose all credibility. If the Trojans can beat us so easily, how long before the Hittites invade?” Odysseus follows, “If we stay, we stay here for the right reasons: to protect Greece, not your pride,” before urging Agamemnon to make peace with Achilles to defeat the Trojans.

It is at this point, factoring the contemporary context for a film released in 2004, that one cannot help but think about the state of the War in Iraq at the time. The invasion of Iraq began on March 20, 2003, with coalition forces sweeping through the country and ousting Saddam Hussein in seemingly decisive fashion, leading President George W. Bush to declare an end to major combat operations on May 1, 2003, the now infamous “Mission Accomplished” speech.

Of course, the War in Iraq did not end after the initial invasion, as continued violence and atrocities mired the region and bogged down military forces. This, along with the revelation that the United States entered the war under false pretenses, the Bush administration allegedly making 935 false statements as part of an orchestrated campaign to galvanize public opinion and misrepresent the security threat posed by Iraq under Hussein, led domestic and international public opinion to turn against American military involvement. Likewise, the initial success and hubris of the besieging Greek army gave way to a protracted conflict that did not go according to plan. Agamemnon’s offensive against Troy, with the stated casus belli of rescuing Helen, differed from his true motive: extending Mycenaean power and influence across the Aegean. For both Agamemnon and President Bush, a desert-based military offensive devolved into a war they dared not lose rather than one that could be won.

For those that would argue that drawing parallels between the plot of Troy and the events of the War in Iraq is reading into the text after the fact, Wolfgang Petersen, the director of the film, would like a word with you. Petersen himself equated President Bush’s actions to those of Agamemnon in his film, explaining,

“Just as King Agamemnon waged what was essentially a war of conquest on the ruse of trying to rescue the beautiful Helen from the hands of the Trojans, President George W. Bush concealed his true motives for the invasion of Iraq.”

In press notes for the film, Petersen contended, “I don’t think that any writer in the last 3,000 years has more graphically and accurately described the horrors of war than Homer,” believing that the poem itself revealed the Trojan War to be a disaster for everyone. In a German interview before the film’s release, Petersen lamented, “People are still using deceit to engage in wars of vengeance. […] It’s as if nothing has changed in 3,000 years.” Rather than the parallels emerging out of coincidence, Petersen’s understanding of geopolitics and conflict in 2004 fundamentally influenced his adaptation of Homer’s Iliad. Rather than leaping directly from the source material onto the screen, for better or worse, Petersen’s narrative of the past was shaped by his understanding of the present.

So, with the director’s work undoubtedly informed by the unfolding War in Iraq, looking to condemn both those in power who use deceit for their own aims and the violence and bloodshed caused by war, why are the most memorable moments of this movie its scenes of violence and bloodshed? Though the film depicts Achilles as nothing short of a jackass until he finally fights for more than himself by rescuing Briseis, his most famous on-screen moments are battle scenes portrayed as “cool” rather than brutal or sickening. In an effort to replicate the success of sword-and-sandal epics of the past, Troy is built around its battle scenes and grandiose set pieces, muddling the director’s anti-war sentiment and the intended message of the film. Instead of a clear repudiation of warfare, something that would truly separate Troy from films like it, what results is the South Park method of “having your cake and eating it too” at best, or a movie that chickened out on its premise at worst.

In her analysis of feminist theory in the Transformers franchise (really), Lindsay Ellis explains that though Megan Fox’s character, Mikaela Banes, is the one who truly has a character arc in the first two films of the franchise, overcoming her past and positively adapting to the new world of machines, the framing and aesthetics of how Fox is shot cast her as eye candy rather than a character in her own right. “Framing and aesthetics supersede the rest of the text, always, always, always,” an axiom that rings particularly true for Wolfgang Petersen’s Troy. Rather than an epic truly devoted to a protest of war, Troy limits its effectiveness as an anti-war film by letting cinematography and production design overtake the text.

Nationalism for Children: How ‘Avatar the Last Airbender’ Explores Violence and National Identity

(Image Credit: Nickelodeon)

(This article was originally published October, 2017)

Avatar: The Last Airbender aired on Nickelodeon from 2005–2008, chronicling the adventures of Aang, a 12-year-old boy who is the most recent incarnation of the Avatar — the person who can bend Water, Earth, Fire and Air — and his companions Katara, Sokka, Toph, and eventually Prince Zuko. Aang and his cohort (calling themselves “Team Avatar”) must put an end to the war with the Fire Nation, led by Fire Lord Ozai.

Featuring complex characters, mature storylines, great animation, a celebration of Asian cultures and a successful balance of humor, tragedy and Eastern philosophy, many consider ATLA one of the best cartoons of all time, a show that adults and children can enjoy. ATLA treated its audience with respect and addressed adult themes and concepts despite being a children’s program.

In particular, ATLA tackles the concept of nationalism, an ideology involving national identity formation and acting in the name of protecting and promoting that identity, through its portrayal of the Fire Nation. Rather than simply a generic label to signify difference from the Earth Kingdom, Water Tribes, and Air Nomads, the Fire Nation embodies the development of national identity and the horrifying results of hypernationalism. The Fire Nation represents the epitome of national identity formation, an “imagined community” defining the “self” as opposed to “the other,” and the ultimate extension of this logic.

In 1983, political scientist and historian Benedict Anderson published Imagined Communities: Reflections on the Origin and Spread of Nationalism to theorize how people tied identity formation to the idea of a nation, leading to nationalist ideology and political agitation to form nation states. Rather than being a primordial identity, national identity formed only through “imagining a community,” fostered by modern printing (what he calls print capitalism) and a decline in the belief in “rule by divine right,” both of which occurred as a result of the Industrial Revolution.

An “imagined community” links a diverse group of people under a single identity. For example, a person in Oregon and a person in South Carolina are both “American” despite separation by thousands of miles and likely never meeting face to face. The conceptualization of the nation explicitly delineates a physical space for the “imagined community” with borders separating “self” from “the other.”

The world of ATLA, with regions defined by the elemental benders that live in each respective area. ‘Avatar: The Last Airbender’ [Credit: Nickelodeon]

While Waterbenders constitute northern and southern tribes, Airbenders populate specific temples, and Earthbenders live in the Earth Kingdom, Firebenders define their region as the “Fire Nation,” a space for an imagined community of Firebenders to live and operate. Unlike the other benders, the Firebenders consider themselves a nation, a continent for the “self” as opposed to the “other.” Nationalism can certainly be benign, a tool to unite a diverse population or a method to refute colonial rule, but Fire Lord Sozin and Fire Lord Ozai represent heightened nationalism that serves as a threat to “the others” in the world of ATLA.

In the Season 3 episode “The Avatar and the Fire Lord,” Fire Lord Sozin shares his idea for a better world with Avatar Roku, Aang’s predecessor.

“Our nation is enjoying an unprecedented time of peace and wealth. Our people are happy, and we’re so fortunate in so many ways. […] we should share this prosperity with the rest of the world. In our hands is the most successful empire in history. It’s time we expanded it.”

The Fire Nation, an industrial power utilizing coal-powered tanks and warships, represents the pinnacle of civility and modernity according to Sozin (modern industrialization being a prerequisite to nation formation per Anderson). He asks Roku — his best friend and fellow Fire Nation citizen — to extend his nationalist project, looking to form colonies in the Earth Kingdom and spread Fire Nation culture throughout the world. Being the Avatar, Roku recognizes the importance of balance between the four elements and their lands, leading him to reject Sozin’s proposal. Later in the episode, when Roku chastises Fire Lord Sozin for setting up Fire Nation colonies despite his warning, Sozin replies:

“How dare you, a citizen of the Fire Nation, address your Fire Lord this way. Your loyalty is to our nation first. Anything less makes you a traitor.”

Sozin declares that Roku has betrayed his nation, the imagined community of the Fire Nation. Roku remains the biggest obstacle to Fire Lord Sozin’s ambition, but with the Avatar’s death, Sozin advances his project, extending the ideology of “self” and “other” to its most extreme.

Roku and Sozin, both Fire Nation citizens, but with vastly different ideologies. ‘Avatar: The Last Airbender’ [Credit: Nickelodeon]

After the death of Avatar Roku, Fire Lord Sozin recognizes that the next Avatar in the cycle will be an Airbender, leading him to conduct the genocide of the Air Nomads, leaving Aang to discover that, after being frozen for 100 years, he is the eponymous “last Airbender.”

The remains of Monk Gyatso, Aang’s mentor, after the Fire Nation genocide. ‘Avatar: The Last Airbender’ [Credit: Nickelodeon]

In the series finale, Fire Lord Ozai attempts a similar genocidal campaign, harnessing the power of Sozin’s comet and adopting the title of Phoenix King, aiming to burn down the entire Earth Kingdom and establish himself as the ruler of the world. Though violent and horrifying, the actions of Sozin and Ozai represent the ultimate extension of national identity formation, where the “self” removes or eliminates the “other” to maintain the homogeneity of the nation.

Ozai, bequeathing the title of Fire Lord and becoming the Phoenix King. ‘Avatar: The Last Airbender’ [Credit: Nickelodeon]

Rather than being separate from or a corruption of nationalism, ethnic cleansing and genocide have served as processes of national identity formation throughout history. Totalitarian regimes use violence to quell threats to the ruling power, while the Armenian Genocide and the Nazi Holocaust represent the worst of human atrocities conducted in the name of Turkish nationalism and fascist ideology, respectively, “protecting the self” from “threats to the nation.” Thus, the Fire Nation is the epitome of national identity formation, an imagined community defining “self” and “other” while using violence as an extension of nationalist rhetoric.

Rather than shy away from mature themes or adult concepts such as nationalism or mass violence, ATLA embraces big ideas and presents them to a young audience while balancing them with fast-paced action, humor and heart. Though nationalism does not automatically result in violence and is not inherently evil, the Fire Nation demonstrates the harm in hypernationalism, a rigid definition of “self” that precludes others.

Not only do show creators Michael Dante DiMartino and Bryan Konietzko illustrate the dangers of rampant nationalism, but they also present a solution: multiculturalism. Aang, Katara, Sokka, Toph, Suki and their allies hail from all across the globe. The embrace and celebration of differences helps the protagonists resist the attempt by the Fire Nation to eliminate difference, a worthy lesson coming from a smart, well-written and forward-thinking television program for children.


Offensive Marketing? Explaining the Aggressive Marketing of Jurassic World: Fallen Kingdom

(Image Credit: Universal Studios)

So unless you’ve been living under a rock, you are likely aware that Jurassic World: Fallen Kingdom opens this weekend. Hell, even if you have been under a rock (I assume the rent was too agreeable to pass up), you’ve likely seen an advertisement for the fifth film in the Jurassic Park franchise. The marketing for this film, a direct follow-up to the decent 2015 film with an amazing final sequence, has been inescapable. Last week, NBC dubbed its programming part of “Jurassic Week” (sadly, none of the dinosaurs ate Jimmy Fallon) and Dairy Queen is serving a “Jurassic Blizzard” (sadly, it is not made with pieces of actual dinosaurs), along with the inundation of commercials across NBCUniversal stations.

Aggressive marketing campaigns for studio tentpoles, especially big-budget sequels, are nothing new, stoking interest among the general public and creating a sense of urgency to see the new film immediately, with box office grosses relying on a substantial opening weekend. With two- and three-year gaps typical for movie franchises, advertisements aim to rekindle interest in the brand and ensure that fans show up for opening weekend.

But the marketing for the new Jurassic World seems particularly saturated and largely unnecessary. After all, the preceding film opened to $208 million, the largest opening weekend ever until the release of Star Wars: The Force Awakens later that year, and went on to gross almost $1.7 billion worldwide, the fifth-highest-grossing film of all time. Through nostalgia for the original film and continued interest in the property through sequels, toy sales, and, umm, questionable video games, it would seem as though everyone is aware of the Jurassic Park franchise, making the aggressive marketing of the new film superfluous and potentially detrimental through overexposure.

Yes, the advertisements for Jurassic World: Fallen Kingdom are too much. But, rather than an attempt to capture new fans or spark interest among people unaware of the film, the aggressive marketing of the new dinosaur flick is a defensive measure by Universal Studios, almost an act of desperation to ensure a big opening weekend along with the viability of the studio’s immediate plans.

Turn the clocks back to 2015, and Universal Studios was sitting pretty. The box office haul of Jurassic World exceeded the studio’s lofty expectations, and the substantial grosses of Furious 7, Minions (collective eye roll), and Fifty Shades of Grey (even harder collective eye roll), along with modest hits like Pitch Perfect 2 and Straight Outta Compton, resulted in a very successful year for the studio. In fact, 2015 marked the highest-grossing year for a single studio in history, with Universal eventually earning $6.9 billion from all of its films.

However, just like the box office records of Jurassic World, Universal’s massive success was short-lived. Star Wars: The Force Awakens broke the opening weekend record merely six months later, and Walt Disney Studios became the dominant movie studio a year after the best year in Universal’s history. The collective performance of Disney’s films surpassed $7 billion in 2016, and another successful year in 2017 made Disney the first studio to gross $6 billion in back-to-back years. Again, before Universal did it in 2015, no film studio had ever made more than $6 billion in a single year, and the fact that rival Disney did it in back-to-back years nullified Universal’s breakthrough swiftly and decisively.

The future also looks to favor Disney. The company owns the rights to Marvel Studios and the Star Wars franchise, along with many of the most successful animated films of all time (legally owning your childhood), and looks to continue producing hits with reliable franchises and intellectual properties, the impressive box office haul of Avengers: Infinity War making up for the “disappointment” of Solo: A Star Wars Story.

Universal does not have it so easy. While the films released by Illumination (the Minions people) will likely fare well, interest in the Fast and Furious franchise appears to be waning, and that’s before considering the behind-the-scenes tension between Vin Diesel and Dwayne “The Rock” Johnson. And the launch of the “Dark Universe,” Universal’s attempt at a sprawling cinematic universe reviving the Universal monsters (Dracula, Frankenstein, etc.) in the same manner as Marvel, proved an utter disaster both critically and commercially.

Thus, the Jurassic Park franchise appears to be the most reliable property for Universal Studios, a company likely feeling encircled by the success of Disney as well as its forthcoming agreement to purchase Twentieth Century Fox. Though the new Jurassic World will almost certainly be a hit, what Universal wants, and arguably needs, is for Fallen Kingdom to equal the box office haul of Infinity War, a movie still playing in theaters that has already surpassed $2 billion worldwide. Though the sheer volume of advertisements may seem offensive, it is purely a defensive strategy by Universal Studios to keep pace in the current box office climate.

Guys, Waluigi is just Awful

(Image Credit: Nintendo)

At E3 2018, the annual event where video game studios subject themselves to immediate fan backlash and deliver awkward presentations to investors and shareholders who likely haven’t picked up a video game controller since skipping classes in favor of a house GoldenEye tournament, Nintendo announced the upcoming release of Super Smash Bros. Ultimate, a brand new game in the long-running series rather than a port of the less-than-successful version for the Nintendo Wii U. One of the biggest points of emphasis was that every character from previous entries would be available in the new game. Characters like Mewtwo and Roy, who were DLC characters in the Wii U and 3DS versions, fan favorite Solid Snake (censored for the innocent among us), and even characters no one particularly wanted back, like the Ice Climbers and Pichu, will be available, along with new characters like the Inklings from Splatoon, Princess Daisy from the Mario franchise, and Ridley from the Metroid series.

After the initial hype for the game died down, the people of the internet took to their keyboards and gathered to complain about a character that was not announced for the new entry in the Smash series. While numerous characters from the Super Mario series, a collection of video games that has grossed more than the GDP of numerous countries, made the roster, one character was left off for the time being. This, of course, is the purple, altitudinous Waluigi.

First appearing in Mario Tennis for the Nintendo 64 in 2000, Waluigi was to Luigi what Wario was to Mario: an evil twin alter ego. Waluigi has since reappeared in other Mario games and was even a non-playable ally (assist trophy) in preceding Super Smash Bros. games. But the decision not to include Waluigi as a playable character (again, for the time being) sparked internet outrage, some protesting ironically (MEMES) while others seriously wished for the ill-tempered irritant to enter the fray in the new game.

(Look at him, he just wants to hit his companions repeatedly until they fly off a stage to their death. Credit to a Facebook friend for sharing this meme)

Guys…Waluigi is awful.

I mean…really, he’s really awful.

“Sir, we know that, we just like him for the memes.”

Doesn’t matter. The creation and celebration of Waluigi represent the same kind of “edgy” thinking that prevailed throughout the 1990s and 2000s to create other awful ideas like Venom, The Death of Superman, Spawn, and even Wario (Wario is also awful). Clamoring for a terrible character representing a terrible idea and terrible thinking, even sarcastically, rewards said terrible ideas and terrible thinking and encourages video game creators to continue with terrible ideas and terrible thinking until all we are left with are anti-versions of the characters we actually like.

(I hate one-note “dark” and “edgy” villains, Kif, it sickens me)

So Nintendo, don’t do it. Don’t put Waluigi in the new Smash game, regardless of internet outrage or stock prices sagging after a lackluster showing at E3. Waluigi is not a good character, no matter what the internet thinks. Ironic cult following or not, “Luigi, but evil” represents the worst form of thinking rather than a creative character, a product of 1990s and early 2000s media trends rather than a true representation of Nintendo’s creativity.


The Three Kings of ‘Salem’s Lot

(Image Credit: Cemetery Dance Publications)

(This article was originally published October, 2017)

Written in 1975, Stephen King’s second novel ‘Salem’s Lot answers the age-old question, “What would happen if the story of Dracula occurred in a small town in Maine?” Both Carrie and ‘Salem’s Lot were heavily influenced by Bram Stoker’s Dracula, a favorite of King’s from his days teaching high school; in the latter case, the influence extends to tone, style, and subject matter (by the way, if you haven’t read the original Dracula, do it, it’s really good). ‘Salem’s Lot centers on Ben Mears, a writer returning to the eponymous town where he spent a portion of his childhood in order to complete his next book. He reminisces about the horror of the Marsten House, meets and falls in love with Susan Norton, and eventually encounters true evil, as vampires decimate the town’s population.

Yet the most interesting scene in ‘Salem’s Lot doesn’t involve vampires at all, but centers on King deep in self-reflection. Many of his works are semi-autobiographical; for example, The Body (later adapted into the coming-of-age classic Stand by Me) serves as King reflecting nostalgically on childhood. ‘Salem’s Lot represents another iteration of this trend, with three characters representing the author’s reckoning with his own psyche.

Ben Mears, the writer, arrives at the town bar and meets up with Ed “Weasel” Craig, a boarder renting a room in the same building as Mears, friendly enough but also a raging alcoholic. “Weasel,” well into a drunken stupor, introduces Ben to schoolteacher Matt Burke before excusing himself to the restroom. Realizing that “Weasel” has been gone for some time, Ben and Matt discover him passed out, “propped against the wall between two urinals, and a fellow in an army uniform was pissing approximately two inches from his right ear” (‘Salem’s Lot, 194).

In Dickensian fashion, this scene depicts King’s past, present, and future. Matt, a schoolteacher (King’s profession before becoming a writer), and Ben, the famous writer and stand-in for the author in this book, look down on the passed-out “Weasel” Craig, the possible “future King.” King’s struggle with drug addiction and alcoholism throughout the 1970s and 1980s is explored in greater allegorical detail in The Shining, but this scene serves as a brief “moment of clarity” for King, recognizing his fate if he continued down the path of addiction. As Ben, the “present King,” looks down on “Weasel”:

“His mouth was open and Ben thought how terribly old he looked, old and ravaged by cold, impersonal forces with no gentle touch in them. The reality of his own dissolution, advancing day by day, came home to him, not for the first time, but with shocking unexpectedness. The pity that welled up in his throat like clear, black waters was as much for himself as it was for Weasel.”

Requiem For The Nintendo Wii


(Image Credit: Nintendo)

(This article was originally published August, 2017)

So the Nintendo Switch, the console/handheld hybrid from Nintendo, seems to be doing well, with sales surpassing industry expectations. Since its worldwide release in March 2017, Nintendo has sold 4.7 million units of the new system, leading to higher stock prices due to investor confidence and an increase in revenue of $1.38 billion from April 1 to June 30. As a system, in the 30 seconds I got to experiment with it, it seems to be fine, deftly transferring the graphics and computing power of a console to a handheld but saddled with a cumbersome control scheme that takes some getting used to. The success of the Nintendo Switch is particularly interesting for its role in the narrative of Nintendo as a gaming company; some say it served as the last chance for Nintendo to prove it could make a successful system, lest the company have to resort to selling off its characters and brands to mobile developers and third parties in the same manner as Sega.

Common narratives of Nintendo’s history as a video game company cite the original NES as an icon of the Golden Age of video games, with the SNES and N64 as worthy follow-ups before a decline in console and game quality in the twenty-first century. Because of its failure to attract third-party developers and match the computing power of its rivals Sony and Microsoft, Nintendo needed to adapt to modern trends in video game sales and development, or else become a dinosaur stuck celebrating the glory of the past while failing to adapt for the present and future.

Yet this narrative ignores or dismisses the Nintendo Wii, treating the 2006 console as an example of Nintendo’s faulty corporate strategy or as a gimmick that made Nintendo a lot of money initially but hurt the company in the long run, as the casual fan base that made the system so popular ditched the Wii for mobile gaming, leaving Nintendo with its core fan base and a system of lower quality than the PS3 or Xbox 360. Rather than a failure, I argue that the Nintendo Wii was a resounding success, a console that vastly outsold its Sony and Microsoft rivals while still producing quality games, with a lasting impact on subsequent game development and gaming culture as a whole.

Sales

While the PS3 and Xbox 360 sold at an even clip compared to each other, with both systems selling about 84 million units according to company sales reports, the Nintendo Wii sold about 102 million units, surpassing its rivals by a significant margin. The curiosity of new motion controls and loyalty to the Nintendo brand led the Wii to outperform the Sony and Microsoft consoles in sheer sales, a point often overlooked in narratives of the seventh console generation or disregarded as consumers buying into a fad. Moreover, games released for the Wii vastly outsold games released for the Xbox 360 or PS3: New Super Mario Bros. Wii, Wii Sports Resort, Mario Kart Wii, and Wii Sports are among the top 10 best-selling video games of all time. These games benefited from being bundled with the console, but other titles like Wii Play, Wii Fit, Super Smash Bros. Brawl, and Super Mario Galaxy also rank near the top of the seventh generation’s best sellers. Though remembered as the better computing systems, the Xbox 360 and PS3 simply didn’t sell as well as the Nintendo Wii and its best-selling games. The Nintendo Wii mixed nostalgia with novelty, appealing to children and their families to create a unique gaming experience that should not be remembered as a failure.

Games

Sure, strong sales are one measure of success, but how good were these games? After all, the Transformers movies are a financial success, but no one thinks those movies are any good. While the motion control system and lack of computing power scared away third-party developers, and titles adapted to the Nintendo system suffered in quality compared to their PS3 and Xbox 360 counterparts (for example, Madden and Call of Duty), Nintendo managed to produce its own classic games, reusing characters and settings while simultaneously moving in new directions. Titles like The Legend of Zelda: Twilight Princess, The Legend of Zelda: Skyward Sword, Super Smash Bros. Brawl, and Super Mario Galaxy rank among the best in their respective franchises, while Nintendo capitalized on brand nostalgia by releasing a new version of Punch-Out!! in 2009 as well as Donkey Kong Country Returns in 2010. New Super Mario Bros. Wii, Mario Kart Wii, and Mario Party became local multiplayer favorites for families and dorm rooms, while games like Xenoblade Chronicles showed that Nintendo could still develop new characters and franchises.

The narrative of a supposed fall from grace and relevance in video game production cites Nintendo’s inability to develop a successful online multiplayer system and its failure to attract third-party developers as examples of impending doom for the company. After all, how could Nintendo survive as a gaming company relying only on Nintendo games and characters? However, Nintendo’s reliance on its own franchises and characters is nothing new; it is the same model that made the company the dominant force in video game production for decades. Nintendo boasts a greater stable of popular franchises than either Sony or Microsoft, with Super Mario, The Legend of Zelda, Pokémon, Animal Crossing, and Metroid among others, so why would the company rely on inferior Wii versions of third-party franchises like Call of Duty, Battlefield, or Grand Theft Auto? Moreover, as Sony and Microsoft battled for a gaming audience largely composed of young males in their teens and twenties, the Nintendo Wii appealed to younger audiences and families, cornering a valuable demographic ignored by Nintendo’s gaming rivals. Rather than copying the tactics of Sony and Microsoft, Nintendo recognized its strengths and weaknesses, focusing on its own characters and franchises to appeal to a different consumer base and producing quality games suited to the computing power of the Wii rather than chasing third-party developers.

Lasting Impact

The most obvious impact of the Nintendo Wii on gaming as a whole is the development of motion control. The initial popularity of the Nintendo Wii led Microsoft and Sony to experiment and produce their own iterations of the concept, resulting in the Xbox Kinect and the PlayStation Move. Even PlayStation VR represents an evolution of the Wii’s interactive concept: a change to the gaming experience as a whole rather than simply a new system with better graphics and computing power. Sure, the Wii’s motion controls weren’t perfect, and the control scheme led to awkward gameplay for certain titles, but the Wii was the first major step toward a dramatic change in gaming rather than the culmination of motion control gaming; nothing is perfect the first time. The Nintendo Wii attempted to bring the future to the present, to change the gaming experience by fusing exercise and interaction with video game play.

The promise of more active and interactive video gaming, especially the promise that the Nintendo Wii was a “healthier video game console,” drew curiosity and purchases from casual gamers along with Nintendo fans, boosting the console’s sales compared to the Xbox 360 and the PS3. The doom-and-gloom teleology of the Wii and Nintendo argues that the motion controls were only a gimmick, as casual gamers ditched the console in favor of mobile gaming and its easy accessibility, leaving only a limited Nintendo fan base and a gaming system inferior to Sony’s and Microsoft’s. Yet I believe it is precisely the Wii’s appeal to families and atypical consumers that led to the popularity of mobile gaming we see today. Nintendo became the dominant company in video game production and development after the video game crash of 1983 by marketing specifically to boys in toy aisles, and console producers continue to target young males as their primary demographic. The Nintendo Wii, by contrast, appealed to younger audiences as well as their parents, offering nostalgia for those who grew up in the 1980s along with casual, easily accessible gameplay. Rather than casual gamers simply shifting to mobile games because of easier access and short attention spans, the Nintendo Wii fundamentally influenced gaming culture. Video games were no longer only toys for children, as previously untapped audiences adopted casual gaming as a fun diversion while waiting for transportation or avoiding awkward silence in social settings. Games like Candy Crush and Clash of Clans, easier to pick up and play and more popular with adults (especially adult women) than console games, owe their rise to the Nintendo Wii, which made gaming accessible and fun for people of all ages.

Far from being a small blip of success in Nintendo’s downward trajectory or emblematic of an archaic game strategy, the Nintendo Wii was a successful console based on sales, games, and lasting impact on gaming culture. Though the console certainly had limitations and its motion controls scared away third-party developers, Nintendo thrived because of the Wii. Combining innovation with nostalgia and recognizing the strength of its own characters and franchises, the Nintendo Wii was not only a successful platform upon its release but a development that continues to influence game design and gaming culture today.

The Journey Begins

Thanks for joining me as I begin blogging, writing, haphazardly throwing streams of consciousness onto the internet. On this site, you’ll find commentary and analysis of movies, video games, television, and pop culture in general. Some of the stuff here has been written (by me, relax) in the past, but it is my hope to provide new content and thoughtful pieces going forward. Thank you for stopping by!