Thursday, April 22, 2010

Produsers, Difficulty Sliders, and 2KShare

As a lifelong fan, I've spent far too much money on lousy baseball video games over the years. For some reason, I have yet to find a game that embodies this sport I love so much in the way I imagine it. I do have some hope in the recent rise of Sony's MLB: The Show series (though I don't have a PS3, so I can't play it). This year, the main baseball video game producer, Take 2, decided to basically copy Sony's approach, a move that, honestly, greatly improved their game. Nevertheless, on initial play, I still found the game lacking.
Generally, when I play a baseball video game, I want it to mimic the sport I watch on television regularly. While Madden, FIFA, and NBA 2K have all managed to at least sustain a suspension of disbelief for me, every baseball game I've played has failed in this respect. Certain key aspects of the sport haven't been incorporated (presumably in favor of abstracted gameplay that the designers decided was more fun), while other aspects that should be abstracted are left realistic, making the game less enjoyable. Examples are in order.
For the first situation, I've found that baseball games have a great deal of trouble with foul balls. For a pitcher to throw a complete game in real-life baseball, he usually has to throw around 120 pitches. By reducing the number of foul balls in the game, the programmers undermine the pitcher stamina mechanic, producing either a pitcher who gets exhausted in the sixth inning after throwing only 60 pitches, or a pitcher who can cruise through a complete game every time because he only throws 90 pitches a game. Both situations disrupt the "realism" of the game and lead to potentially serious playability issues (most baseball fans know to pull a pitcher at around 100 pitches; if the pitcher gets exhausted after 60, the player feels cheated, and if he can continue to 150, you might as well do without the stamina mechanic).
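To make the arithmetic concrete, here is a minimal sketch of the problem, with numbers and a stamina formula I'm inventing purely for illustration (this is not how any actual 2K or Sony game is coded):

```python
# Hypothetical stamina model, calibrated so that ~120 pitches (a real-world
# complete game) drains a starter completely.
PITCHES_PER_COMPLETE_GAME = 120

def innings_before_exhaustion(pitches_per_inning):
    """Count how many innings a starter lasts before running out of stamina."""
    stamina = 1.0
    innings = 0
    while stamina > 0 and innings < 9:
        stamina -= pitches_per_inning / PITCHES_PER_COMPLETE_GAME
        innings += 1
    return innings

# With realistic foul-ball rates (~15 pitches an inning), the starter tires
# around the 8th inning, roughly when a real manager would pull him.
print(innings_before_exhaustion(15))  # 8
# With too few foul balls (~10 pitches an inning), he cruises through nine
# every time, and the stamina mechanic might as well not exist.
print(innings_before_exhaustion(10))  # 9
```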
On the other side of this issue, baseball games have a nasty habit of leaving certain extremely difficult aspects of baseball unabstracted. The particular mechanic that bothers me is the batter's ability to identify pitches in and out of the strike zone. Most players, we can assume, do not have the reaction time to read a 95 mph fastball as it leaves the pitcher's hand and approaches the plate. Recognizing this, game designers have tried a variety of abstracting mechanics. Many have tried to slow the pitches down to the point that the player can see them. The problem is that this falls into the uncanny valley of mimesis: the game tries to appear real by not abstracting the batter's eye, but fails to appear real in that the pitches move far too slowly. A more acceptable approach would be to abstract the mechanic wholly, as The Show did in previous years. In The Show, the batter was given the opportunity to guess which part of the plate the pitcher was going to throw to (e.g. low and inside, up and outside, etc.). If you guessed correctly, you could see where the pitch was going before it got there, and whether it was in or out of the strike zone. With this highly abstract mechanic, the player can easily run up pitch counts, draw walks, etc. Still, some players have strongly resisted this mechanic, feeling that it makes the game unrealistic, and potentially too easy.
What we find in both of these situations is a tension between reproducing the sporting event, i.e. a baseball game as it would be seen on television, and creating a fun and properly challenging game experience. Since each player approaches these games differently, sports games have found a reasonable solution in the development of "difficulty sliders." These sliders allow you to tweak the mechanics of the game to fit your desires. For example, as I mentioned above, I feel that baseball games don't allow enough foul balls. Ideally, a baseball game would include one slider determining the likelihood of the batter making contact and another determining the likelihood of solid contact; by setting batter contact high but solid contact low, I could produce a game with lots of foul balls but not too many solid hits.
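Something like the following sketch is what I have in mind; the slider names, the 0-100 scale, and the probabilities are my own invention rather than anything from an actual game:

```python
import random

def simulate_swing(batter_contact, solid_contact):
    """One swing under two hypothetical 0-100 sliders:
    batter_contact -> chance the bat touches the ball at all
    solid_contact  -> chance that contact is squared up into fair territory
    """
    if random.random() >= batter_contact / 100:
        return "swing and miss"
    if random.random() < solid_contact / 100:
        return "ball in play"
    return "foul ball"

def sample_rates(batter_contact, solid_contact, swings=10_000):
    results = [simulate_swing(batter_contact, solid_contact) for _ in range(swings)]
    return {outcome: results.count(outcome) / swings for outcome in set(results)}

# Batter contact high, solid contact low: long at-bats full of foul balls,
# rising pitch counts, but no flood of base hits.
print(sample_rates(batter_contact=85, solid_contact=20))
```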
The problem with the slider solution is that these sorts of mechanical manipulations are exceedingly complicated and require extensive testing. Most players don't want to play dozens of games, making minor tweaks to the sliders, before they can finally produce a game that matches their vision of what the sport should be. Thus, with the development of sliders, a secondary community has formed on message boards online (particularly operationsports.com), where those players who do enjoy developing sliders post their most realistic sets and discuss them with other players who help test their designs. By posting on the internet, the slider developer can rely on other players to help test, reducing the time needed to produce a valid set of sliders.
Surprisingly, 2K has openly endorsed this fan activity, in fact creating a service called 2KShare for users to exchange sliders easily. This function is embedded in the game itself, allowing a user with internet access to immediately download and implement other players' sliders, rosters, etc., without having to sit around manually adjusting what might be dozens of fields. Ironically, this service is basically 2K admitting that they are incapable of successfully accounting for this aspect of the game themselves, and therefore relying on the fan base to do it for them. Instead of balancing the game, 2K takes the role of a world-maker, the producer of a system that can be adjusted to meet the expectations of a wide range of different fans. As the game updates each year, it is less important for 2K to produce a balanced game than to produce a set of sliders that accounts for every aspect of the game a fan might want to adjust. We can't accuse the designers of laziness in relying on produsers to create difficulty settings for them; in fact, they have aptly recognized that the game-designer paradigm itself doesn't allow for successful difficulty balance! By recognizing that their fans are not a unitary group with a single set of desires, 2K is moving in a direction that could eventually produce the most accessible and successful sports games ever made.
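Conceptually, a shared slider set is nothing more than a small bundle of data, something like the sketch below; the field names and format are purely hypothetical, since I have no idea what 2KShare actually uses under the hood:

```python
import json

# A slider set is just data: a produser tunes it, posts it, and anyone else can
# download and apply it without manually adjusting dozens of fields.
# (Field names and format are invented for illustration, not 2K's actual schema.)
shared_sliders = {
    "author": "an_operationsports_poster",
    "notes": "tuned over ~30 test games for realistic pitch counts",
    "settings": {
        "batter_contact": 85,
        "solid_contact": 20,
        "foul_ball_frequency": 70,
        "pitcher_stamina_drain": 45,
    },
}

# "Uploading" and "downloading" amount to serializing the bundle and applying it.
uploaded = json.dumps(shared_sliders)
downloaded = json.loads(uploaded)
print(downloaded["settings"]["foul_ball_frequency"])  # 70, applied in one step
```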

Wednesday, April 21, 2010

Totally irrelevant to the purpose of this blog

This is a short essay I wrote for a class on Victorianism and the law. I couldn't think of anywhere else to put it, even though it has little to do with the general purposes of this blog. Actually, though, looking towards the end of the essay, I begin discussing my general attempts to reclaim laziness. Maybe it's more relevant than I thought. (P.S. I haven't even proofread this, so if it's a bit stumbly, that's why. After I do an edit, I'll repost).

Isabel Archer’s Library and the Foundation of Idleness

“The foundation of her knowledge was really laid in the idleness of her grandmother’s house, where, as most of the other inmates were not reading people, she had uncontrolled use of a library full of books with frontispieces, which she used to climb upon a chair to take down” (Penguin 78).

If I were to follow directly in the footsteps of Gaston Bachelard, I would subject the entirety of Isabel Archer's grandmother's house to a close and careful phenomenological analysis as a route to discovering the deeper connection between this "first universe" (Bachelard 4) and Isabel's eventual fate. Given the length of this assignment, however, I will restrict myself to these few words on Isabel's early days; within these brief images of her life at her grandmother's house, we find the roots of her character. By marking Isabel's moments in her grandmother's library as foundational, The Portrait of a Lady places itself in the long heritage of self-loathing novels, which locate the true origin of character fault in fiction itself.

Bachelard argues in his seminal The Poetics of Space that "the house shelters daydreaming, the house protects the dreamer, the house allows one to dream in peace" (Bachelard 6). He continues, arguing that the various parts of the house come to represent key locations within our mind; the attic, for example, is a place of whimsy and rational imagination. The cellar, on the other hand, is the "dark entity of the house, the one that partakes of subterranean forces" (Bachelard 18). Drawing on Jung, Bachelard identifies the attic as the representative of the super-ego and the cellar as the poetic image of the id. Thus, when we hear motions in the attic, we want to explore, to comprehend, to understand; yet when we hear rumblings in the basement, we cover our ears and pray that whatever is down there will just disappear.

Thus, the statement that the "foundation" of Isabel's knowledge was laid in the library is particularly odd. In the first place, foundations are stable and dark, the cement that keeps the house standing, but also the walls of the frightening cellar; the library serves as a support for Isabel's mind, yet also contains the root causes of her actions. The library is a troubling construct to begin with, when confronted through Bachelard's phenomenology. It is supposedly the store of knowledge for a family, a place where heritage, thought, rationality, and blood are sustained for future generations. Yet this library is abandoned by the other "inmates" of the house, serving as a merely superficial representation of those qualities. Like the frontispieces of the books that Isabel leafs through, the library pretends to reveal in its very nature what is contained within; yet, just as looking at a frontispiece reduces the complexity of a novel to mere stereotype, the image of the library presents knowledge, civilization, and class as mere echoes of the true, eidetic nature of such concepts.

Upon closer examination, however, we notice that it is not the library itself that serves as the foundation of Isabel's knowledge: it is her idleness. Thus, the basement of Isabel's mind, the location of her basic values, is represented as a cardinal sin. The library, which is supposed to (however hollowly) represent those classic values of knowledge and heritage, in fact contains laziness and self-involvement. Thus, Isabel is instantly attracted to the European elite and their way of life; she ties the values that the library represents in the collective mind to the lazy idleness of youth. Of course, Henrietta's obnoxious American work ethic and demands for self-sacrifice are justified by Isabel's troubles in Europe. Yet the negativity implicit in this framing of youthful idleness is bound up with the Puritanical work ethic that Henrietta views as critical to the American character. However, we, as readers, may want to look to Bachelard, Heidegger, and the other continental philosophers who attempted to reclaim idleness as a place of value. Or, as Bachelard says, "Thought and experience are not the only things that sanction human values. The values that belong to daydreaming mark humanity in its depths" (Bachelard 6).

Monday, April 19, 2010

O'Reilly's "Web Squared" and the Creation of Value in an Information Technology

Tim O'Reilly's discussions of Web 2.0 can easily be dismissed as corporate attempts to claim control over the information democracy arising on the internet. Yet, within his corporate rhetoric, we find the complexities of what we consider valuable in the information age, both as commodity and as metric. The complex relationship between agent and labor is changing as more and more of our lives are wired. Ultimately, by shifting the location of value from commodity to labor, the information economy justifies such fan practices as piracy and reappropriation.

In the era of Web 2.0, both humans and technology have assumed the role of agent in relation to data. As data-agents, we harvest data, as well as organize and analyze it. Take, for example, a crowdsourced research approach to Twitter. The initial data-set is created by a large number of human agents, each providing their own perspective on the issue. Subsequently, a computer condenses these millions of self-standing data-pieces into a cogent trend, which can then be sold to a marketing firm. Finally, a human agent bends this large data-set into a series of profitable statements, which can be converted into something of value, i.e. an advertising campaign. When we discuss O'Reilly's vaunted "collective intelligence," we tend to privilege the first agents alone, the data-providers. Yet the digital agent, the data-analysis software, serves as an equally important component of the larger system, and thus functions on an equal plane with the human data-providers, as well as the human data-receivers.
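A toy version of that three-stage pipeline might look like the following; the tweets and the trivial word-counting stand in for what would really be a far more sophisticated analysis:

```python
from collections import Counter

# Stage 1: many human agents each contribute a small, self-standing data-piece.
tweets = [
    "loving the new phone, battery lasts forever",
    "new phone battery died in an hour, never again",
    "battery life on this phone is unreal",
]

# Stage 2: a digital agent condenses the raw pieces into a trend.
words = [
    word
    for tweet in tweets
    for word in tweet.lower().replace(",", "").split()
]
trend = Counter(words).most_common(3)

# Stage 3: a human agent turns the trend into something saleable, e.g.
# "battery life dominates the conversation -- build the ad campaign around it."
print(trend)
```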

While some might argue that this shift dehumanizes people, we could equally claim that data-culture encourages a broader perspective of who (and what) can be considered an agent. O'Reilly pays lip-service to concerns about dehumanization in his second article, but ultimately dismisses such problems with banal claims concerning expanded communication and shared identities: "There are many who worry about the dehumanizing effect of technology. We share that worry, but also see the counter-trend, that communication binds us together, gives us shared context, and ultimately shared identity" (web2summit.com 9). By heralding the wonder of communication, O'Reilly distracts from the central concern of dehumanization. Web 2.0 is not about communication but about data-processing; Web 1.0 was about communication. The sort of collaboration encouraged by Web 2.0 is not person to person, but person to technology. In fact, O'Reilly points to the elevation of the technological as agent when he declares humans the "partner" of sensory equipment (web2summit.com 8). Ultimately, if we choose to consider ourselves part of a larger collective intelligence, then we must choose to recognize our partners in such endeavors as our equals, and not our servants.

In effect, the labor of the new economy lies in data-collection and data-analysis; the capital is the hard data itself. O'Reilly asserts that "the era of Web 2.0, therefore, [is] a race to acquire and control data assets" (web2summit.com 3). Yet he stresses that some data-analysis systems should not be monopolized, for such tactics prevent innovation (ibid.). This is an extremely complex argument concerning the nature of capital in the new economy. For O'Reilly, the larger data set is the only thing that the corporation can ethically control; even the algorithms used to collect that data should be available for all to see, and potentially use. The Web 2.0 corporation need not worry about somebody else "stealing" its capability to produce data, because that capability is not merely the labor of the machine; it is the joint labor of the machine and the large base of user/data-providers. Google, for example, could share its search algorithm with the world without worrying that somebody else could detract from its profits, because the algorithm requires the gigantic database developed by billions of searches, a database wholly owned by Google.

In the new economy, value is produced entirely out of labor, with little attention to capital itself. This is not to suggest that Google's database, a massive conglomeration of capital, is of little value; in fact, it is one of the most valuable assets in the world. But the value of the database lies not in the specific data-pieces it contains, but in the labor-process of accruing such a vast collection of data. Hence, Google's database is more valuable to marketing firms than Bing's, because Google's database is larger, and thus produces more effective and complete information. We must be careful, though, to distinguish between the value that Web 2.0 services provide for users and the value they provide for end-capitalists. The Web 2.0 company must provide a dual service: it must provide complex databases that it can sell to end-capitalists, such as marketing firms; it must also provide innovative and useful services to the front-end data-producers, to keep them providing data. Thus, even though Google appears to have a dominant grasp of the search engine industry, it could potentially lose out if another data-analysis agent provided a more innovative and interesting way for users to contribute data. Again, we see that the total value provided by the Web 2.0 company lies in the data-collection and data-analysis labor it provides, not in the data itself.

Because the value produced by the Web 2.0 company lies in the collection and analysis of huge sets of data, each piece of raw data retains a value that cannot be ethically collected. Take, for example, MySpace's controversial attempt to lay claim to any and all creative material posted on their website. While general consensus has not condemned Google's collection of search data, for the value Google produces is in its methods, MySpace violated users' trust, for it created value in the traditional sense: it exploited labor to create a product it could sell at a grossly inflated price. Data collection and analysis systems are not inherently exploitative of data-producers, because they provide a necessary service that the laborers cannot in any way provide on their own. The database itself is what holds value, not the solitary data-pieces. Thus, a song posted on YouTube remains the possession of the person who produced it; even the specific data for that song remains public domain (hence the easily visible hit counter). All YouTube can claim from the specific data-piece is the right to incorporate it into its larger database and analysis system. The production of data, be it a single search, a music video on YouTube, or a social network on Facebook, derives its value from the labor of production.

The increasingly contested status of fan-appropriation, then, has its roots in this changing notion of how value is produced in the new economy. Media companies generally consider themselves old-economy stand-bys. They exploit artistic production to produce exchange value in an art commodity; for example, a record label might claim the rights to all the songs produced by a given artist. In such a paradigm, when a fan covers that song, the copyright holder can claim that said fan has stolen their product and detracted from its value. In the Web 2.0 paradigm that O'Reilly presents, that fan has the right to reproduce the song, for he is contributing labor to the market and producing knowledge of said song. Hence the argument that media companies should become promoters rather than copyright holders, and thereby continue to profit despite rampant piracy on the internet. In the new economy, piracy is a value-producing activity, which allows data-collectors to produce valuable database commodities and provides the copyright holder with greater public awareness, which can in turn be converted into a saleable, labor-intensive product, such as a concert. By attempting to retain their position in the old economy, media conglomerates are quickly forcing themselves into irrelevance.

Anyone who claims that old-economy production will simply fade away, though, ignores the continuing relevance of the commodity economy in certain industries, such as food and energy production. Certainly, public awareness of an agriculture company will not provide it with the value it deserves for producing food; similarly, pharmaceutical companies do not gain by allowing other companies to produce the drugs that they spent huge amounts of money developing. The information economy is relevant only for certain industries, and thus cannot be seen as universal; we cannot demand that non-information companies play by the same rules as the information industry. Thus, sneaking into a movie theater, thereby stealing the product of a non-information company (the theater), can be considered morally wrong, while downloading the film onto your computer, thereby gaining access to information belonging to an information company (the producers/distributors), remains moral.

The problem with our current approach to the misappropriation of value by consumers is that few to no regulations have been developed to distinguish between the information economy and the classical commodity economy. Some large corporations, like Microsoft, try to have it both ways, keeping their source code private, as if the software were a commodity, but also demanding that other companies function on their standards, as if their product were an information technology that gains value with more users. If we consider fan practices to be the precursor to modern technological practices, we see that users are willing to trade their personal data for access; tech companies that attempt to drown out their user base run the risk of alienating the very thing that makes them strong.