The Digital Antiquarian (2024)

Shades of Gray


Ladies and gentlemen, come and see. This isn’t a country here but an epic failure factory, an excuse for a place, a weed lot, an abyss for tightrope walkers, blindman’s bluff for the sightless saddled with delusions of grandeur, proud mountains reduced to dust dumped in big helpings into the cruciform maws of sick children who crouch waiting in the hope of insane epiphanies, behaving badly and swamped besides, bogged down in their devil’s quagmires. Our history is a corset, a stifling cell, a great searing fire.

— Lyonel Trouillot

What’s to be done about Haiti?

Generations have asked that question about the first and most intractable poster child for postcolonial despair, the poorest country in North or South America now and seemingly forever, a place whose corruption and futility manage to make the oft-troubled countries around it look like models of good governance. Nowhere does James Joyce’s description of history as “a nightmare from which I am trying to awake” feel more apt. Indeed, Haiti stands as perhaps the ultimate counterargument to the idealistic theory of history as progress. Here history really is just one damned thing after another — differing slightly in the details, but always the same at bottom.

But why should it be this way? What has been so perplexing and infuriating about Haiti for so long is that there seems to be no real reason for its constant suffering. Long ago, when it was still a French colony, it was known as the “Pearl of the Caribbean,” and was not only beautiful but rich; at the time of the American Revolution, it was richer than any one of the thirteen British American colonies. Those few who bother to visit Haiti today still call it one of the most beautiful places of all in the beautiful region that is the Caribbean. Today the Dominican Republic, the nation with which Haiti shares the island of Hispaniola, is booming, the most popular tourist spot in the Caribbean, with the fastest-growing economy anywhere in North or South America. But Haiti, despite being blessed with all the same geographic advantages, languishes in poverty next door, seething with resentment over its condition. It’s as if the people of Haiti have been cursed by one of the voodoo gods to which some of them still pray to act out an eternal farce of chaos, despair, and senseless violence.

Some scenes from the life of Haiti…

…you are a proud Mandingue hunter in a hot West African land. But you’re not hunting. You’re being hunted — by slavers, both black and white. You run, and run, and run, until your lungs are near to bursting. But it’s no use. You’re captured and chained like an animal, and thrust into the dank hold of a sailing ship. Hundreds of your countrymen and women are here — hungry, thirsty, some beaten and maimed by your captors. All are terrified for themselves and their families, from whom they’ve been cruelly separated. Many die on the long voyage. But when it’s over, you wonder if perhaps they were the lucky ones…

The recorded history of the island of Hispaniola begins with the obliteration of the people who had always lived there. The Spanish conquistadors arrived on the island in the fifteenth century, bringing with them diseases against which the native population, known as the Taíno, had no resistance, along with a brutal regime of forced labor. Within two generations, the Taíno were no more. They left behind only a handful of words which entered the European vocabulary, like “hammock,” “hurricane,” “savanna,” “canoe,” “barbecue,” and “tobacco.” The Spanish, having lost their labor force, shrugged their shoulders and largely abandoned Hispaniola.

But in the ensuing centuries, Europeans developed a taste for sugar, which could be produced in large quantities only in the form of sugarcane, which in turn grew well only in tropical climates like those of the Caribbean. Thus the abandoned island of Hispaniola began to have value again. The French took possession of the western third of the island — the part known as Haiti today — with the Treaty of Ryswick, which ended the Nine Years War in 1697. France officially incorporated its new colony of Saint-Domingue on Hispaniola the same year.

Growing sugarcane demanded backbreaking labor under the hot tropical sun, work of a kind judged unsuitable for any white man. And so, with no more native population to enslave, the French began to import slaves from Africa. Their labor turned Saint-Domingue in a matter of a few decades from a backwater into one of the jewels of France’s overseas empire. In 1790, the year of the colony’s peak, 48,000 slaves were imported to join the 500,000 who were already there. It was necessary to import slaves in such huge numbers just to maintain the population in light of the appalling death toll of those working in the fields; little Saint-Domingue alone imported more slaves over the course of its history than the entirety of the eventual United States.

…you’re a slave, toiling ceaselessly in a Haitian cane field for your French masters. While they live bloated with wealth, you and your fellows know little but hardship and pain. Brandings, floggings, rape, and killing are everyday events. And for the slightest infraction, a man could be tortured to death by means limited only by his owners’ dark imaginations. What little comfort you find is in the company of other slaves, who, at great risk to themselves, try to keep the traditions of your lost homeland alive. And there is hope — some of your brothers could not be broken, and have fled to the hills to live free. These men, the Maroons, are said to be training as warriors, and planning for your people’s revenge. Tonight, you think, under cover of darkness, you will slip away to join them…

The white masters of Saint-Domingue, who constituted just 10 percent of the colony’s population, lived in terror of the other 90 percent, and this fear contributed to the brutality with which they punished the slightest sign of recalcitrance on the part of their slaves. Further augmenting their fears of the black Other was the slaves’ foreboding religion of voodoo: a blending of the animistic cults they had brought with them from tribal Africa with the more mystical elements of Catholicism — all charms and curses, potions and spells, trailing behind it persistent rumors of human sacrifice.

Even very early in the eighteenth century, some slaves managed to escape into the wilderness of Hispaniola, where they formed small communities that the white men found impossible to dislodge. Organized resistance, however, took a long time to develop.

Legend has it that the series of events which would result in an independent nation on the western third of Hispaniola began on the night of August 21, 1791, when a group of slave leaders secretly gathered at a hounfour — a voodoo temple — just outside the prosperous settlement of Cap‑Français. Word of the French Revolution had reached the slaves, and, with mainland France in chaos, the time seemed right to strike here in the hinterlands of the empire. A priestess slit the throat of a sacrificial pig, and the head priest said that the look and taste of the pig’s blood indicated that Ogun and Ghede, the gods of war and death respectively, wanted the slaves to rise up. Together the leaders drank the blood under a sky that suddenly broke into storm, then sneaked back onto their individual plantations at dawn to foment revolution.

That, anyway, is the legend. There’s good reason to doubt whether the ceremony at the hounfour actually happened, but the revolution certainly did.

…you are in the middle of a bloody revolution. You are a Maroon, an ex-slave, fighting in the only successful slave revolt in history. You have only the most meager weapons, but you and your comrades are fighting for your very lives. There is death and destruction all around you. Once-great plantation houses lie in smouldering ruins. Corpses, black and white, litter the cane fields. Ghede walks among them, smiling and nodding at his rich harvest. He sees you and waves cheerfully…

The proudest period of Haiti’s history — the one occasion on which Haiti actually won something — began before a nation of that name existed, when the slaves of Saint-Domingue rose up against their masters, killing or driving them off their plantations. After the French were dispensed with, the ex-slaves continued to hold their ground against Spanish and English invaders who, concerned about what an example like this could mean for other colonies, tried to bring them to heel.

In 1798, a well-educated, wily former slave named Toussaint Louverture consolidated control of the now-former French colony. He spoke both to his own people and to outsiders using the language of the Enlightenment, drawing from the American Declaration of Independence and the French Declaration of the Rights of Man and the Citizen, putting a whole new face on this bloody revolution that had supposedly been born at a voodoo hounfour on a hot jungle night.

Toussaint Louverture was frequently called the black George Washington in light of the statesmanlike role he played for his people. He certainly looked the part. Would Haiti’s history have been better had he lived longer? We can only speculate.

…and you are battling Napoleon’s armies, Europe’s finest, sent to retake the jewel of the French empire. You have few resources, but you fight with extraordinary courage. Within two years, sixty thousand veteran French troops have died, and your land is yours again. The French belong to Ghede, who salutes you with a smirk…

Napoleon had now come to power in France, and was determined to reassert control over his country’s old empire even as he set about conquering a new one. In 1802, he sent an army to retake the colony of Saint-Domingue. Toussaint Louverture was tricked, captured, and shipped to France, where he soon died in a prison cell. But his comrades in arms, helped along by a fortuitous outbreak of yellow fever among the French forces and by a British naval blockade stemming from the wars back in Europe, defeated Napoleon’s finest definitively in November of 1803. The world had little choice but to recognize the former colony of Saint-Domingue as a predominately black independent nation-state, the first of its type.

With Louverture dead, however, there was no one to curb the vengeful instincts of the former slaves who had defeated the French after such a long, hard struggle. It was perfectly reasonable that the new nation would take for its name Haiti — the island of Hispaniola’s name in the now-dead Taíno language — rather than the French appellation of Saint-Domingue. Less reasonable were the words of independent Haiti’s first leader, and first in its long line of dictators, Jean-Jacques Dessalines, who said that “we should use the skin of a white man as a parchment, his skull for an inkwell, his blood for ink, and a bayonet for a pen.” True to his words, he proceeded to carry out systematic genocide on the remaining white population of Haiti, destroying in the process all of the goodwill that had accrued to the new country among progressives and abolitionists in the wider world. His vengeance cost Haiti both much foreign investment that might otherwise have been coming its way and the valuable contribution the more educated remaining white population, by no means all of whom had been opposed to the former slaves’ cause, might have been able to make to its economy. A precedent had been established which holds to this day: of Haiti being its own worst enemy, over and over again.

…a hundred years of stagnation and instability flash by your eyes. As your nation’s economic health declines, your countrymen’s thirst for coups d’etat grows. Seventeen of twenty-four presidents are overthrown by guile or force of arms, and Ghede’s ghastly armies swell…

So, Haiti, having failed from the outset to live up to the role many had dreamed of casting it in as the first enlightened black republic, remained poor and inconsequential, mired in corruption and violence, as its story devolved from its one shining moment of glory into the cruel farce it remains to this day. The arguable lowlight of Haiti’s nineteenth century was the reign of one Faustin Soulouque, who had himself crowned Emperor Faustin I — emperor of what? — in 1849. American and European cartoonists had a field day with the pomp and circumstance of Faustin’s “court.” He was finally exiled to Jamaica in 1859, after he had tried and failed to invade the Dominican Republic (an emperor has to start somewhere, right?), extorted money from the few well-to-do members of Haitian society and defaulted on his country’s foreign debt in order to finance his palace, and finally gotten himself overthrown by a disgruntled army officer. Like the vast majority of Haiti’s leaders down through the years, he left his country in even worse shape than he found it.

Haiti’s Emperor Faustin I was a hit with the middle-brow reading public in the United States and Europe.

…you are a student, protesting the years-long American occupation of your country. They came, they said, to thwart Kaiser Wilhelm’s designs on the Caribbean, and to help the Haitian people. But their callous rule soon became morally and politically bankrupt. Chuckling, Ghede hands you a stone and you throw it. The uprising that will drive the invaders out has begun…

In 1915, Haiti was in the midst of one of its periodic paroxysms of violence. Jean Vilbrun Guillaume Sam, the country’s sixth president in the last four years, had managed to hold the office for just five months when he was dragged out of the presidential palace into the street and torn limb from limb by a mob. The American ambassador to Haiti, feeling that the country had descended into a state of complete anarchy that could spread across the Caribbean, pleaded with President Woodrow Wilson to intervene. Fearing that Germany and its allies might exploit this chaos on the United States’s doorstep if and when his own country should enter the First World War on the opposing side, Wilson agreed. On July 28, 1915, a small force of American sailors occupied the Haitian capital of Port-au-Prince almost without firing a shot — a far cry from Haiti’s proud struggle for independence against the French. Haiti was suddenly a colony again, although its new colonizers did promise that the occupation was temporary. It was to last just long enough to set the country on its feet and put a sound system of government in place.

When the Americans arrived in Haiti, they found its people’s lives not all that much different from the way they had lived at the time of Toussaint Louverture. Here we see the capital city of Port-au-Prince, the most “developed” place in the country.

The American occupation wound up lasting nineteen years, during which the occupiers did much practical good in Haiti. They paved more than a thousand miles of roadway; built bridges and railway lines and airports and canals; erected power stations and radio stations, schools and hospitals. Yet, infected with the racist attitudes toward their charges that were all too typical of the time, they failed at the less concrete tasks of instilling a respect for democracy and the rule of law. They preferred to make all the rules themselves by autocratic decree, giving actual Haitians only a token say in goings-on in their country. This prompted understandable anger and a sort of sullen, passive resistance among Haitians to all of the American efforts at reform, occasionally flaring up into vandalism and minor acts of terrorism. When the Americans, feeling unappreciated and generally hard-done-by, left Haiti in 1934, it didn’t take the country long to fall back into the old ways. Within four years President Sténio Vincent had declared himself dictator for life. But he was hardly the only waxing power in Haitian politics.

…a tall, ruggedly handsome black man with an engaging smile.

He is speaking to an assembled throng in a poverty-stricken city neighborhood. He tells moving stories about his experiences as a teacher, journalist, and civil servant. You admire both his skillful use of French and Creole, and his straightforward ideas about government. With eloquence and obvious sincerity, he speaks of freedom, justice and opportunity for all, regardless of class or color. His trenchant, biting criticisms of the establishment delight the crowd of longshoremen and laborers.

“Latin America and the Caribbean already have too many dictators,” he says. “It is time for a truly democratic government in Haiti.” The crowd roars out its approval…

The aspect of Haitian culture which had always baffled the Americans the most was the fact that this country whose population was 99.9 percent black was nevertheless riven by racism as pronounced as anywhere in the world. The traditional ruling class was the mulattoes: Haitians who could credit their lighter skin to white blood dating back to the old days of colonization, and/or to the fact that they and their ancestors hadn’t spent long years laboring in the sun. They made up perhaps 10 percent of the population, and spoke and governed in French. The rest of the population was made up of the noir Haitians: the darker-skinned people who constituted the working class. They spoke only the Haitian Creole dialect for the most part, and thus literally couldn’t understand most of what their country’s leaders said. In the past, it had been the mulattoes who killed one another to determine who ruled Haiti, while the noir Haitians just tried to stay out of the way.

In the 1940s, however, other leaders came forward to advance the cause of the “black” majority of the population; these leaders became known as the noiristes. Among the most prominent of them was Daniel Fignolé, a dark-skinned Haitian born, like most of his compatriots, into extreme poverty in 1913. Unlike most of them, he managed to educate himself by dint of sheer hard work, became political at the sight of the rampant injustice and corruption all around him, and came to be known as the “Moses of Port-au-Prince” for the fanatical loyalty he commanded among the stevedores, factory workers, and other unskilled laborers in and around the capital. Fignolé emphasized again and again that he was not a Marxist — an ideology that had been embraced by some of the mulattoes and was thus out of bounds for any good noiriste. Yet he did appropriate the Marxist language of proletariat and bourgeoisie, and left no doubt which side of that divide he was fighting for. For years, he remained an agitating force in Haitian politics without ever quite breaking through to real power. Then came the tumultuous year of 1957.

Daniel Fignolé, the great noiriste advocate for the working classes of Haiti.

…but you’re now a longshoreman in Port-au-Prince, and your beloved Daniel Fignolé has been ousted after just nineteen days as Provisional President. Rumors abound that he has been executed by Duvalier and his thugs. You’re taking part in a peaceful, if noisy, demonstration demanding his return. Suddenly, you’re facing government tanks and troops. Ghede rides on the lead tank, laughing and clapping his hands in delight. You shout your defiance and pitch a rock at the tank. The troops open fire, and machine-gun bullets rip through your chest…

One Paul Magloire, better known as Bon Papa, had been Haiti’s military dictator since 1950. The first few years of his reign had gone relatively well; his stridently anticommunist posturing won him some measure of support from the United States, and Haiti briefly even became a vacation destination to rival the Dominican Republic among sun-seeking American tourists. But when a devastating hurricane struck Hispaniola in 1954 and millions of dollars in international aid disappeared in inimitable Haitian fashion without ever reaching the country’s people, the mood among the elites inside the country who had been left out of that feeding frenzy began to turn against Bon Papa. On December 12, 1956, he resigned his office by the hasty expedient of jumping into an airplane and getting the hell out of Dodge before he came to share the fate of Jean Vilbrun Guillaume Sam. The office of the presidency, a hot potato if ever there was one, then passed through three more pairs of hands in the next six months, while an election campaign to determine Haiti’s next permanent leader took place.

Of course, in Haiti election campaigns were fought with fists, clubs, knives, guns, bombs, and, most of all, rampant, pervasive corruption at every level. Still, in a rare sign of progress of a sort in Haitian politics, the two strongest candidates were both noiristes promising to empower the people rather than the mulatto elites. They were Daniel Fignolé and François Duvalier, the latter a frequent comrade-in-arms of the former during the struggles of the last twenty years who had now become a rival; he was an unusually quiet, even diffident-seeming personality by the standards of Haitian politics, so much so that many doubted his mental fortitude and intelligence alike. But Duvalier commanded enormous loyalty in the countryside, where he had worked for years as a doctor, often in tandem with American charitable organizations. Meanwhile Fignolé’s urban workers remained as committed to him as ever, and clashes between the supporters of the two former friends were frequent and often violent.

The workers around Port-au-Prince pledged absolute allegiance to Daniel Fignolé. He liked to call them his wuolo konmpresé — his “steamrollers,” always ready to take to the streets for a rally, a demonstration, or just a good old fight.

But then, on May 25, 1957, Duvalier unexpectedly threw his support behind a bid to make his rival the latest provisional president while the election ran its course, and Fignolé marched into the presidential palace surrounded by his cheering supporters. In a stirring speech on the palace steps, he promised a Haitian “New Deal” in the mold of Franklin D. Roosevelt’s American version.

The internal machinations of Haitian politics are almost impossible for an outsider to understand, but many insiders have since claimed that Duvalier, working in partnership with allies he had quietly made inside the military, had set Fignolé up for a fall, contriving to remove him from the business of day-to-day campaigning and thereby shore up his own support while making sure his presidency was always doomed to be a short one even by Haitian standards. At any rate, on the night of June 14, 1957 — just nineteen days after he had assumed the post — a group of army officers burst into Fignolé’s office, forced him to sign a resignation letter at gunpoint, and then tossed him into an airplane bound for the United States, exiling him on pain of death should he ever return to Haiti.

The deposing of Fignolé ignited another spasm of civil unrest among his supporters in Port-au-Prince, but their violence was met with even more violence by the military. There were reports of soldiers firing machine guns into the crowds of demonstrators. People were killed in the hundreds if not thousands in the capital, even as known agitators were rounded up en masse and thrown into prison, and the offices of newspapers and magazines supporting Fignolé’s cause were closed and ransacked. On September 22, 1957, it was announced that François Duvalier had been elected president by the people of Haiti.

Inside the American government, opinion was divided about the latest developments in Haiti. The CIA was convinced that, despite Fignolé’s worrisome leftward orientation, his promised socialist democracy was a better, more stable choice for the United States’s close neighbor than a military junta commanded by Duvalier. The agency thus concocted a scheme to topple Duvalier’s new government, which was to begin with the assassination of his foreign minister, Louis Raimone, on an upcoming visit to Mexico City to negotiate an arms deal. But the CIA’s plans accidentally fell into the hands of one Austin Garriot, an academic doing research for his latest book in Washington, D.C. Garriot passed the plans on to J. Edgar Hoover’s FBI, who protested strongly that any attempt to overthrow Duvalier would be counter to international law — and who emphasized as well that Duvalier had declared himself to be strongly pro-American and anti-Soviet. With the top ranks of the FBI threatening to expose the illegal assassination plot to other parts of the government if the scheme was continued, the CIA had no choice but to quietly abandon it. Duvalier remained in power, unmolested.

He had promised his supporters a bright future…

…before a shining white city atop a hill. A sign welcomes you to Duvalierville. As you walk through the busy streets, well-dressed, cheerful people greet you as they pass by. You are struck by the abundance of goods and services offered, and the cleanliness and order that prevails. Almost every wall is adorned with a huge poster of a frail, gray-haired black man wearing a dark suit and horn-rimmed glasses.

Under the figure are the words: “Je suis le drapeau Haitien, Uni et Indivisible. François Duvalier.”

Everyone you ask about the man says the same thing: “We all love Papa Doc. He’s our president for life now, and we pray that he will live forever.”

Instead the leader who came to be known as Papa Doc — this quiet country doctor — became another case study in the banality of evil. During his fourteen years in power, an estimated 60,000 people were executed upon his personal extra-judicial decree. The mulatto elite, who constituted the last remnants of Haiti’s educated class and thus could be a dangerous threat to his rule, were a particular target; purge after purge cut a bloody swath through their ranks. When Papa Doc died in 1971, his son Jean-Claude Duvalier — Baby Doc — took over for another fifteen years. The world became familiar with the term “Haitian boat people” as the Duvaliers’ desperate victims took to the sea in the most inadequate of crafts. For them, any shred of hope for a better life was worth grasping at, no matter what the risk.

…you find yourself at sea, in a ragged little boat. Every inch of space is crowded with humanity. They’re people you know and care about deeply. You have no food or water, but you have something more precious — hope. In your native Haiti, your life has become intolerable. The poverty, the fear, the sudden disappearances of so many people — all have driven you to undertake this desperate journey into the unknown.

A storm arises, and your small boat is battered by the waves and torn apart. One by one, your friends, your brothers, your children slip beneath the roiling water and are lost. You cling to a rotten board as long as you can, but you know that your dream of freedom is gone. “Damn you, Duvalier,” you scream as the water closes over your head…


And now I have to make a confession: not quite all of the story I’ve just told you is true. That part about the CIA deciding to intervene in Haitian politics, only to be foiled by the FBI? It never happened (as far as I know, anyway). That part, along with all of the quoted text above, is rather lifted from a fascinating and chronically underappreciated work of interactive fiction from 1992: Shades of Gray.

Shades of Gray was the product of a form of collaboration which would become commonplace in later years, but which was still unusual enough in 1992 that it was remarked upon in virtually every mention of the game: the seven people who came together to write it had never met one another in person, only online. The project began when a CompuServe member named Judith Pintar, who had just won the 1991 AGT Competition with her CompuServe send-up Cosmoserve, put out a call for collaborators to make a game for the next iteration of the Competition. Mark Baker, Steve Bauman, Belisana, Hercules, Mike Laskey, and Cindy Yans wound up joining her, each writing a vignette for the game. Pintar then wrote a central spine to bind all these pieces together. The end result was so much more ambitious than anything else made for that year’s AGT Competition that organizer David Malmberg created a “special group effort” category just for it — which, being the only game in said category, it naturally won.

Yet Shades of Gray’s unusual ambition wasn’t confined to its size or number of coauthors. It’s also a game with some serious thematic heft.

The idea of using interactive fiction to make a serious literary statement was rather in abeyance in the early 1990s. Infocom had always placed a premium on good writing, and had veered at least a couple of times into thought-provoking social and historical commentary with A Mind Forever Voyaging and Trinity. But neither of those games had been huge sellers, and Infocom’s options had always been limited by the need to please a commercial audience who mostly just wanted more fun games like Zork from them, not deathless literary art. Following Infocom’s collapse, amateur creators working with development systems like AGT and TADS likewise confined almost all of their efforts to making games in the mold of Zork — unabashedly gamey games, with lots of puzzles to solve and an all-important score to accumulate.

On the surface, Shades of Gray may not seem a radical departure from that tradition; it too sports lots of puzzles and a score. Scratch below the surface, though, and you’ll find a text adventure with more weighty thoughts on its mind than any since 1986’s Trinity (a masterpiece of a game which, come to think of it, also has puzzles and a score, thus proving these elements are hardly incompatible with literary heft).

It took the group who made Shades of Gray much discussion to arrive at its central theme, which Judith Pintar describes as one of “moral ambiguity”: “We wanted to show that life and politics are nuanced.” You are cast in the role of Austin Garriot, a man whose soul has become unmoored from his material being for reasons that aren’t ever — and don’t really need to be — clearly explained. With the aid of a gypsy fortune teller and her Tarot deck, you explore the impulses and experiences that have made you who you are, presented in the form of interactive vignettes carved from the stuff of symbolism and memory and history. Moral ambiguity does indeed predominate through echoes of the ancient Athens of Antigone, the Spain of the Inquisition, the United States of the Civil War and the Joseph McCarthy era. In the most obvious attempt to present contrasting viewpoints, you visit Sherwood Forest twice, playing once as Robin Hood and once as the poor, put-upon Sheriff of Nottingham, who’s just trying to maintain the tax base and instill some law and order.

> examine chest
The chest is solidly made, carved from oak and bound together with strips
of iron. It contains the villagers' taxes -- money they paid so you could
defend them against the ruffians who inhabit the woods. Unfortunately, the
outlaws regularly attack the troops who bring the money to Nottingham, and
generally steal it all.

Because you can no longer pay your men-at-arms, no one but you remains to protect the local villagers. The gang is taking full advantage of this, attacking whole communities from their refuge in Sherwood Forest. You are alone, but you still have a duty to perform.

Especially in light of the contrasting Robin Hood vignettes, it would be all too easy for a reviewer like me to neatly summarize the message of Shades of Gray as something like “there are two sides to every story” or “walk a mile in my shoes before you condemn me.” And, to be sure, that message is needed more than ever today, not least by the more dogmatic members of our various political classes. Yet to claim that that’s all there is to Shades of Gray is, I think, to do it a disservice. Judith Pintar, we should remember, described its central theme as moral ambiguity, which is a more complex formulation than just a generalized plea for empathy. There are no easy answers in Shades of Gray — no answers at all really. It tells us that life is complicated, and moral right is not always as easy to determine as we might wish.

Certainly that statement applies to the longstanding question with which I opened this article: What to do about Haiti? In the end, it’s the history of that long-suffering country that comes to occupy center stage in Shades of Gray’s exploration of… well, shades of gray.

Haiti’s presence in the game is thanks to the contributor whose online handle was Belisana.[1] It’s an intriguingly esoteric choice of subject matter for a game written in this one’s time and place; of the contributors, only Mark Baker had any sort of personal connection to Haiti, having worked there for several months back in 1980. Belisana began her voyage into Haitian history with a newspaper clipping, chanced upon in a library, from that chaotic year of 1957. She included a lightly fictionalized version of it in the game itself:

U.S. AID TO HAITI REDUCED TWO-THIRDS

PORT-AU-PRINCE, Haiti, Oct. 8 — The United States government today shut down two-thirds of its economic aid to Haiti. The United States Embassy sources stressed that the action was not in reprisal against the reported fatal beating of a United States citizen last Sunday.

The death of Shibley Matalas was attributed by Col. Louis Raimone, Haitian Foreign Minister, to a heart attack. Three U.S. representatives viewed Mr. Matalas’ body. Embassy sources said they saw extensive bruises, sufficient to be fatal.

Through my own archival research, I’ve determined that in the game Belisana displaced the date of the actual incident by one week, from October 1 to October 8, and that she altered the names of the principals: Shibley Matalas was actually named Shibley Talamas, and Louis Raimone was Louis Roumain. The incident in question occurred after François Duvalier had been elected president of Haiti but three weeks before he officially assumed the office. The real wire report, as printed in the Long Beach Press Telegram, tells a story too classically Haitian not to share in full.

Yank in Haitian Jail Dies, U.S. Envoy Protests

Port-au-Prince, Haiti (AP) — Americans were warned to move cautiously in Haiti today after Ambassador Gerald Drew strongly protested the death of a U.S. citizen apparently beaten while under arrest. The death of Shibley Talamas, 30-year-old manager of a textile factory here, brought the United States into the turmoil which followed the presidential election Sept. 22 in the Caribbean Negro republic.

Drew protested Monday to Col. Louis Roumain, foreign minister of the ruling military junta. The ambassador later cautioned Americans to be careful and abide by the nation’s curfew.

Roumain had gone to the U.S. Embassy to present the government’s explanation of Talamas’ death, which occurred within eight hours of his arrest.

The ambassador said Roumain told him Talamas, son of U.S. citizens of Syrian extraction, was arrested early Sunday afternoon in connection with the shooting of four Haitian soldiers. The soldiers were killed by an armed band Sunday at Kenscoff, a mountain village 14 miles from this capital city.

Drew said Roumain “assured me that Talamas was not mistreated.”

While being questioned by police, Talamas tried to attack an officer and to reach a nearby machine gun, Roumain told Drew. He added that Talamas then was handcuffed and immediately died of a heart attack.

The embassy said three reliable sources reported Talamas was beaten sufficiently to kill him.

One of these sources said Talamas’ body bore severe bruises about the legs, chest, shoulders, and abdomen, and long incisions that might have been made in an autopsy.

A Haitian autopsy was said to have confirmed that Talamas died of a heart attack. The location of the body remained a mystery. It was not delivered immediately to relatives.

Talamas, 300-pound son of Mr. and Mrs. Antoine Talamas, first was detained in the suburb of Petionville. Released on his promise to report later to police, he surrendered to police at 2 p.m. Sunday in the presence of two U.S. vice-consuls. His wife, Frances Wilpula Talamas, formerly of Ashtabula, Ohio, gave birth to a child Sunday.

Police said they found a pistol and shotgun in Talamas’ business office. Friends said he had had them for years.

Before seeing Roumain Monday, Drew tried to protest to Brig. Gen. Antonio Kébreau, head of the military junta, but failed in the attempt. An aide told newsmen that Kébreau could not see them because he had a “tremendous headache.”

Drew issued a special advisory to personnel of the embassy and U.S. agencies and to about 400 other Americans in Haiti. He warned them to stay off the streets during the curfew — 8 p.m. to 4 a.m. — except for emergencies and official business.

Troops and police have blockaded roads and sometimes prevented Americans getting to and from their homes. Americans went to their homes long ahead of the curfew hour Monday night. Some expressed fear that Talamas’ death might touch off other incidents.

Calm generally prevailed in the country. Police continued to search for losing presidential candidate Louis Déjoie, missing since the election. His supporters have threatened violence and charged that the military junta rigged the election for Dr. François Duvalier, a landslide winner in unofficial returns.

Official election results will be announced next Tuesday. Duvalier is expected to assume the presidency Oct. 14.

The Onion, had it existed at the time, couldn’t have done a better job of satirizing the farcical spectacle of a Haitian election. And yet all this appeared in a legitimate news report, from the losing candidate who mysteriously disappeared to the prisoner who supposedly dropped dead of a heart attack as soon as his guards put the handcuffs on him — not to mention the supreme leader with a headache, which might just be my favorite detail of all. Again: what does one do with a place like this, a place so corrupt for so long that corruption has become inseparable from its national culture?

But Shades of Gray is merciless. In the penultimate turn, it demands that you answer that question — at least this one time, in a very specific circumstance. Still playing the role of the hapless academic Austin Garriot, you’ve found a briefcase with all the details of the CIA’s plot to kill the Haitian foreign minister and initiate a top-secret policy of regime change in the country. The CIA’s contracted assassin, the man who lost the briefcase in the first place, is a cold fish named Charles Calthrop. He’s working together with Michael Matalas, vengeance-seeking brother of the recently deceased Shibley Matalas (né Talamas), and David Thomas, the CIA’s bureau chief in Haiti; they all want you to return the briefcase to them and forget that you ever knew anything about it. But two FBI agents, named Smith and Wesson (ha, ha…), have gotten wind of the briefcase’s contents, and want you to give it to them instead so they can stop the conspiracy in its tracks.

So, you are indeed free to take the course of action I’ve already described: give the briefcase to the FBI, and thereby foil the plot and strike a blow for international law. This will cause the bloody late-twentieth-century history of Haiti that we know from our own timeline to play out unaltered, as Papa Doc consolidates his grip on the country unmolested by foreign interventions.

Evil in a bow tie: François Duvalier at the time of the 1957 election campaign. Who would have guessed that this unassuming character would become the worst single Haitian monster of the twentieth century?

Or you can choose not to turn over the briefcase, to let the CIA’s plot take its course. And what happens then? Well, this is how the game describes it…

Smith and Wesson were unable to provide any proof of the CIA’s involvement in Raimone’s killing, and they were censured by Hoover for the accusation.

The following Saturday, Colonel Louis Raimone died from a single rifle shot through the head as he disembarked from a plane in Mexico City. His assassin was never caught, nor was any foreign government ever implicated.

It was estimated that the shot that killed Raimone was fired from a distance of 450 yards, from a Lee Enfield .303 rifle. Very few professionals were capable of that accuracy over that distance; Charles Calthrop was one of the few, and the Lee Enfield was his preferred weapon.

Duvalier didn’t survive long as president. Without the riot equipment that Raimone had been sent to buy, he was unable to put down the waves of unrest that swept the country. The army switched its allegiance to the people, and he was overthrown in March 1958.

Duvalier lived out the rest of his life in exile in Paris, and died in 1964.

Daniel Fignolé returned to govern Haiti after Duvalier was ousted, and introduced an American-style democracy. He served three 5-year terms of office, and was one of Kennedy’s staunchest allies during the Cuban missile crisis. He is still alive today, an elder statesman of Caribbean politics.

His brother’s death having been avenged, Michel Matalas returned to his former job as a stockman in Philadelphia. He joined the army and died in Vietnam in 1968. His nephew, Shibley’s son Mattieu, still lives in Haiti.

David Thomas returned to Haiti in his role as vice-consul, and became head of the CIA’s Caribbean division. He provided much of the intelligence that allowed Kennedy to bluff the Russians during the Cuban missile crisis before returning to take up a senior post at Langley.

What we have here, then, is a question of ends versus means. In the universe of Shades of Gray, at least, carrying out an illegal assassination and interfering in another sovereign country’s domestic politics leads to a better outcome than the more straightforwardly ethical course of abiding by international law.

Ever since it exited World War II as the most powerful country in the world, the United States has been confronted with similar choices time and time again. It’s for this reason that Judith Pintar calls her and her colleagues’ game “a story about American history as much as it is about Haiti.” While its interference in Haiti on this particular occasion does appear to have been limited or nonexistent in our own timeline, we know that the CIA has a long history behind it of operations just like the one described in the game, most of which didn’t work out nearly so well for the countries affected. And we also know that such operations were carried out by people who really, truly believed that their ends did justify their means. What can we do with all of these contradictory facts? Shades of gray indeed.

Of course, Shades of Gray is a thought experiment, not a serious study in geopolitical outcomes. There’s very good reason to question whether the CIA, who saw Daniel Fignolé as a dangerously left-wing leader, would ever have allowed him to assume power once again; having already chosen to interfere in Haitian politics once, the agency would have found a second effort to keep Fignolé out of power that much easier to justify. (This, one might say, is the slippery slope of interventionism in general.) Even had he regained and subsequently maintained his grip on the presidency, there’s reason to question whether Fignolé would really have become the mechanism by which true democracy finally came to Haiti. The list of Haitian leaders who once seemed similarly promising, only to disappoint horribly, is long; it includes that arguably greatest Haitian monster of all, the mild-mannered country doctor named François Duvalier, alongside such more recent disappointments as Jean-Bertrand Aristide. Perhaps Haiti’s political problems really are cultural problems, and as such are not amenable to fixing by any one person. Or, as many a stymied would-be reformer has speculated over the years, perhaps there really is just something in the water down there, or a voodoo curse in effect, or… something.

So, Shades of Gray probably won’t help us solve the puzzle of Haiti. It does, however, provide rich food for thought on politics and ethics, on the currents of history and the winds of fate — and it’s a pretty good little text adventure too. Its greatest weakness is the AGT development system that was used to create it, whose flexibility is limited and whose parser leaves much to be desired. “Given a better parser and the removal of some of the more annoying puzzles,” writes veteran interactive-fiction reviewer Carl Muckenhoupt, “this one would easily rate five stars.” I don’t actually find the puzzles all that annoying, but do agree that the game requires a motivated player willing to forgive and sometimes to work around the flaws of its engine. Any player willing to do so, though, will be richly rewarded by this milestone in interactive-fiction history, the most important game in terms of the artistic evolution of the medium to appear between Infocom’s last great burst of formal experiments in 1987 and the appearance of Graham Nelson’s landmark game Curses! in 1993. Few games in all the years of text-adventure history have offered more food for thought than Shades of Gray — a game that refuses to provide incontrovertible answers to the questions it asks, and is all the better for it.

In today’s Haiti, meanwhile, governments change constantly, but nothing ever changes. The most recent election as of this writing saw major, unexplained discrepancies between journalists’ exit polling and the official results, accompanied by the usual spasms of violence in the streets. Devastating earthquakes and hurricanes in recent years have only added to the impression that Haiti labors under some unique curse. On the bright side, however, it has been nearly a decade and a half since the last coup d’etat, which is pretty good by Haitian standards. You’ve got to start somewhere, right?

(Sources: the books Red & Black in Haiti: Radicalism, Conflict, and Political Change 1934-1957, Haiti: The Tumultuous History — From Pearl of the Caribbean to Broken Nation by Philippe Girard, and Haiti: The Aftershocks of History by Laurent Dubois; Life of June 3 1957; Long Beach Press Telegram of October 1 1957. My huge thanks go to Judith Pintar for indulging me with a long conversation about Shades of Gray and other topics. You can read more of our talk elsewhere on this site.

You can download Shades of Gray from the IF Archive. You can play it using the included original interpreter through DOSBox, or, more conveniently, with a modern AGT interpreter such as AGiliTY or — best of all in my opinion — the multi-format Gargoyle.)

Footnotes

1 I do know her real name, but don’t believe it has ever been published in connection with Shades of Gray, and therefore don’t feel comfortable “outing” her here.


Posted by Jimmy Maher on September 14, 2018 in Digital Antiquaria, Interactive Fiction

Tags: agt, compuserve, pintar, shades of gray

Agrippa (A Book of the Dead)

07Sep

Is it the actor or the drama
Playing to the gallery?
Or is it but the character
Of any single member of the audience
That forms the plot
of each and every play?

“Hanging in the Gallery” by Dave Cousins

I was introduced to the contrast between art as artifact and art as experience by an episode of Northern Exposure, a television show which meant a great deal to my younger self. In “Burning Down the House,” Chris in the Morning, the deejay of the town of Cicely, Alaska, has decided to fling a living cow through the air using a trebuchet. Why? To create a “pure moment.”

“I didn’t know what you are doing was art,” says Shelley, the town’s good-hearted bimbo. “I thought it had to be in a frame, or like Jesus and Mary and the saints in church.”

“You know, Shell,” answers Chris in his insufferable hipster way, “the human soul chooses to express itself in a profound profusion of ways, not just the plastic arts.”

“Plastic hearts?”

“Arts! Plastic arts! Like sculpture, painting, charcoal. Then there’s music and poetry and dance. Lots of people, Susan Sontag notwithstanding, include photography.”

“Slam dancing?”

“Insofar as it reflects the slam dancer’s inner conflict with society through the beat… yeah, sure, why not? You see, Shelley, what I’m dealing with is the aesthetics of the transitory. I’m creating tomorrow’s memories, and, as memories, my images are as immortal as art which is concrete.”

Certain established art forms — those we generally refer to as the performing arts — have this quality baked into them in an obvious way. Keith Richards of the Rolling Stones once made the seemingly arrogant pronouncement that his band was “the greatest rock-and-roll band in the world” — but later modified his statement by noting that “on any given night, it’s a different band that’s the greatest rock-and-roll band in the world.” It might be the Rolling Stones playing before an arena full of 20,000 fans one night, and a few sweaty teenagers playing for a cellar full of twelve the next. It has nothing to do with the technical skill of the musicians; music is not a skills competition. A band rather becomes the greatest rock-and-roll band in the world the moment when the music goes someplace that transcends notes and measures. This is what the ancient Greeks called the kairos moment: the moment when past and future and thought itself fall away and there are just the band, the audience, and the music.

But what of what Chris in the Morning calls the “plastic arts,” those oriented toward producing some physical (or at least digital) artifact that will remain in the world long after the artist has died? At first glance, the kairos moment might seem to have little relevance here. Look again, though. Art must always be an experience, in the sense that there is a viewer, a reader, or a player who must experience it. And the meaning it takes on for that person — or lack thereof — will always be profoundly colored by where she was, who she was, when she was at the time. You can, in other words, find your own transitory transcendence inside the pages of a book just as easily as you can in a concert hall.

The problem with the plastic arts is that it’s too easy to destroy the fragile beauty of that initial impression. It’s too easy to return to the text trying to recapture the transcendent moment, too easy to analyze it and obsess over it and thereby to trample it into oblivion.

But what if we could jettison the plastic permanence from one of the plastic arts, creating something that must live or die — like a rock band in full flight or Chris in the Morning’s flying cow — only as a transitory transcendence? What if we could write a poem which the reader couldn’t return to and fuss over and pin down like a butterfly in a display case? What if we could write a poem that the reader could literally only read one time, that would flow over her once and leave behind… what? As it happens, an unlikely trio of collaborators tried to do just that in 1992.


Very early that year, a rather strange project prospectus made the rounds of the publishing world. Its source was Kevin Begos, Jr., who was known, to whatever extent he was known at all, as a publisher of limited-edition art books for the New York City gallery set. This new project, however, was something else entirely, and not just because it involved the bestselling science-fiction author William Gibson, who was already ascending to a position in the mainstream literary pantheon as “the prophet of cyberspace.”

Kevin Begos Jr., publisher of museum-quality, limited edition books, has brought together artist Dennis Ashbaugh (known for his large paintings of computer viruses and his DNA “portraits”) and writer William Gibson (who coined the term cyberspace, then explored the concept in his award-winning books Neuromancer, Count Zero, and Mona Lisa Overdrive) to produce a collaborative Artist’s Book.

In an age of artificial intelligence, recombinant genetics, and radical, technologically-driven cultural change, this “Book” will be as much a challenge as a possession, as much an enigma as a “story”.

The Text, encrypted on a computer disc along with a Virus Program written especially for the project, will mutate and destroy itself in the course of a single “reading”. The Collector/Reader may either choose to access the Text, thus setting in motion a process in which the Text becomes merely a Memory, or preserve the Text unread, in its “pure” state — an artifact existing exclusively in cyberspace.

Ashbaugh’s etchings, which allude to the potent allure and taboo of Genetic Manipulation, are both counterpoint and companion-piece to the Text. Printed on beautiful rag paper, their texture, odor, form, weight, and color are qualities unavailable to the Text in cyberspace. (The etchings themselves will undergo certain irreparable changes following their initial viewing.)

This Artist’s Book (which is not exactly a “book” at all) is cased in a wrought metal box, the Mechanism, which in itself becomes a crucial, integral element of the Text. This book-as-object raises unique questions about Art, Time, Memory, Possession—and the Politics of Information Control. It will be the first Digital Myth.

William Gibson had been friends with Dennis Ashbaugh for some time, ever since the latter had written him an admiring letter a few years after his landmark novel Neuromancer was published. The two men worked in different mediums, but they shared an interest in the transformations that digital technology and computer networking were bringing to society. They corresponded regularly, although they met only once in person.

Yet it was neither Gibson the literary nor Ashbaugh the visual artist who conceived their joint project’s central conceit; it was instead none other than the author of the prospectus above, publisher Kevin Begos, Jr., another friend of Ashbaugh. Ashbaugh, who like Begos was based in New York City, had been looking for a way to collaborate with Gibson, and came to his publisher friend looking for ideas that might be compelling enough to interest such a high-profile science-fiction writer, who lived all the way over in Vancouver, Canada, just about as far away as it was possible to get from New York City and still be in North America. “The idea kind of came out of the blue,” says Begos: “to do a book on a computer disk that destroys itself after you read it.” Gibson, Begos thought, would be the perfect writer to whom to pitch such a project, for he innately understood the kairos moment in art; his writing was thoroughly informed by the underground rhythms of the punk and new-wave music scenes. And, being an acknowledged fan of experimental literature like that written by his hero William S. Burroughs, he wasn’t any stranger to conceptual literary art of the sort which this idea of a self-destroying text constituted.

Even so, Begos says that it took him and Ashbaugh a good six to nine months to convince Gibson to join the project. Even after agreeing to participate, Gibson proved to be the most passive of the trio by far, providing the poem that was to destroy itself early on but then doing essentially nothing else after that. It’s thus ironic and perhaps a little unfair that the finished piece remains today associated almost exclusively with the name of William Gibson. If one person can be said to be the mastermind of the project as a whole, that person must be Kevin Begos, Jr., not William Gibson.

Begos, Ashbaugh, and Gibson decided to call their art project Agrippa (A Book of the Dead), adopting the name Gibson gave to his poem for the project as a whole. Still, there was, as the prospectus above describes, much more to it than the single self-immolating disk which contained the poem. We can think of the whole artwork as being split into two parts: a physical component, provided by Ashbaugh, and a digital component, provided by Gibson, with Begos left to tie them together. Both components were intended to be transitory in their own ways. (Their transcendence, of course, must be in the eye of the beholder.)

Begos said that he would make and sell just 455 copies of the complete work, ranging in price from $450 for the basic edition to $7500 for a “deluxe copy in a bronze case.” The name of William Gibson lent what would otherwise have been just a wacky avant-garde art project a great deal of credibility with the mainstream press. It was discussed far and wide in the spring and summer of 1992, finding its way into publications like People, Entertainment Weekly, Esquire, and USA Today long before it existed as anything but a set of ideas inside the minds of its creators. A reporter for Details magazine repeated the description of a Platonic ideal of Agrippa that Begos relayed to him from his fond imagination:

‘Agrippa’ comes in a rough-hewn black box adorned with a blinking green light and an LCD readout that flickers with an endless stream of decoded DNA. The top opens like a laptop computer, revealing a hologram of a circuit board. Inside is a battered volume, the pages of which are antique rag-paper, bound and singed by hand.

Like a frame of unprocessed film, ‘Agrippa’ begins to mutate the minute it hits the light. Ashbaugh has printed etchings of DNA nucleotides, but then covered them with two separate sets of drawings: One, in ultraviolet ink, disappears when exposed to light for an hour; the other, in infrared ink, only becomes visible after an hour in the light. A paper cavity in the center of the book hides the diskette that contains Gibson’s fiction, digitally encoded for the Macintosh or the IBM.

[…]

The disk contained Gibson’s poem Agrippa: “The story scrolls on the screen at a preset pace. There is no way to slow it down, speed it up, copy it, or remove the encryption that ultimately causes it to disappear.” Once the text scrolled away, the disk got wiped, and that was that. All that would be left of Agrippa was the reader’s memory of it.

The three tricksters delighted in the many paradoxes of their self-destroying creation with punk-rock glee. Ashbaugh laughed about having to send two copies of it to the copyright office — because to register it for a copyright, you had to read it, but when you read it you destroyed it. Gibson imagined some musty academic of the future trying to pry the last copy out of the hands of a collector so he could read it — and thereby destroy it definitively for posterity. He described it as “a cruel joke on book collectors.”

As I’ve already noted, Ashbaugh’s physical side of the Agrippa project was destined to be overshadowed by Gibson’s digital side, to the extent that the former is barely remembered at all today. Part of the problem lay in the realities of working with physical materials, which conspired to undo much of the original vision for the physical book. The LCD readout and the circuit-board hologram fell by the wayside, as did Ashbaugh’s materializing and de-materializing pictures. (One collector has claimed that the illustrations “fade a bit” over time, but one does have to wonder whether even that is wishful thinking.)

But the biggest reason that one aspect of Agrippa so completely overshadowed the other was ironically the very thing that got the project noticed at all in so many mainstream publications: William Gibson’s fame in comparison to his unknown collaborators. People magazine didn’t even bother to mention that there was anything to Agrippa at all beyond the disk; “I know Ashbaugh was offended by that,” says Begos. Unfortunately obscured by this selective reporting was an intended juxtaposition of old and new forms of print, a commentary on evolving methods of information transmission. Begos was as old-school as publishers got, working with a manual printing press not so very different from the one invented by Gutenberg; each physical edition of Agrippa was a handmade objet d’art. Yet all most people cared about was the little disk hidden inside it.

So, even as the media buzzed with talk about the idea of a digital poem that could only be read once, Begos had a hell of a time selling actual, physical copies of the book. As of December of 1992, a few months after it went to press, Begos said he still had about 350 copies of it sitting around waiting for buyers. It seems unlikely that most of these were ever sold; they were quite likely destroyed in the end, simply because the demand wasn’t there. Begos relates a typical anecdote:

There was a writer from a newspaper in the New York area who was writing something on Agrippa. He was based out on Long Island and I was based in Manhattan. He sent a photographer to photograph the book one afternoon. And he’d done a phone interview with me, though I don’t remember if he called Gibson or not. He checked in with me after the photographer had come to make sure that it had gone alright, and I said yes. I said, “Well aren’t you coming by; don’t you want to see the book?” He said “No; you know, the traffic’s really bad; you know, I just don’t have time.” He published his story the next day, and there was nothing wrong with it, but I found that very odd. It probably would have taken him an hour to drive in, or he could have waited a few days. But some people, they almost seemed resistant to seeing the whole package.

It’s inevitable, given the focus of this site, that our interest too will largely be captured by the digital aspect of the work. Yet the physical artwork — especially the full-fledged $7500 edition — certainly is an interesting creation in its own right. Rather than looking sleek and modern, as one might expect from the package framing a digital text from the prophet of cyberpunk, it looks old — mysteriously, eerily old. “There’s a little bit of a dark side to the Gibson story and the whole mystery about it and the whole notion of a book that destroys itself, a text that destroys itself after you read it,” notes Begos. “So I thought that was fitting.” It smacks of ancient tomes full of forbidden knowledge, like H.P. Lovecraft’s Necronomicon, or the Egyptian Book of the Dead to which its parenthetical title seems to pay homage. Inside was to be found abstract imagery and, in lieu of conventional text, long strings of numbers and characters representing the gene sequence of the fruit fly. And then of course there was the disk, nestled into its little pocket at the back.

The deluxe edition of Agrippa is housed in this box, made out of fiberglass and paper and “distressed” by hand.

The book is inside a shroud and another case. Its title has been burned into it by hand.

The book’s 64 hand-cut pages set long chunks of the fruit-fly genome alongside Dennis Ashbaugh’s images evocative of genetics — and occasional images, such as the pistol above, drawn from Gibson’s poem “Agrippa.”

The last 20 pages have been glued together — as usual, by hand — and a pocket cut out of them to hold the disk.

But it was, as noted, the contents of the disk that really captured the public’s imagination, and that’s where we’ll turn our attention now.

William Gibson’s contribution to the project is an autobiographical poem of approximately 300 lines and 2000 words. The poem is named after something far more commonplace than its foreboding packaging might imply: “Agrippa” was actually the brand name of a type of photo album sold by Kodak in the early and middle decades of the twentieth century. Gibson’s poem begins as he has apparently just discovered such an artifact — “a Kodak album of time-burned black construction paper” — in some old attic or junk room. What follows is a meditation on family and memory, on the roots of things that made William Gibson the man he is now. There’s a snapshot of his grandfather’s Appalachian sawmill; there’s a pistol from some semi-forgotten war; there’s a picture of downtown Wheeling, West Virginia, 1917; there’s a magazine advertisement for a Rocket 88; there’s the all-night bus station in Wytheville, Virginia, where a young William Gibson used to go to buy cigarettes for his mother, and from which a slightly older one left for Canada to avoid the Vietnam draft and take up the life of an itinerant hippie.

Gibson is a fine writer, and “Agrippa” is a lovely, elegiac piece of work which stands on its own just fine as plain old text on the page, divorced from all of its elaborate packaging and the work of conceptual art that was its original means of transmission. (Really, it does: go read it.) It was also the least science-fictional thing he had written to date — quite an irony in light of all the discussion about publication in the age of cyberspace that swirled around it. But then, the ironies truly pile up in layers when it comes to this artistic project. It was ironically appropriate that William Gibson, a famously private person, should write something so deeply personal only in the form of a poem designed to disappear as soon as it had been read. And perhaps the supreme irony was this disappearing poem’s interest in the memories encoded by permanent artifacts like an old photo album, an old camera, or an old pistol. This interest in the way that everyday objects come to embody our collective memory would go on to become a recurring theme in Gibson’s later, more mature, less overtly cyberpunky novels. See, for example, the collector of early Sinclair microcomputers who plays a prominent role in 2003’s Pattern Recognition, in my opinion Gibson’s best single novel to date.

But of course it wasn’t as if the public’s interest in Agrippa was grounded in literary appreciation of Gibson’s poem, any more than it was in artistic appreciation of the physical artwork that surrounded it. All of that was rather beside the point of the mainstream narrative — and thus we still haven’t really engaged with the reason that Agrippa was getting write-ups in the likes of People magazine. Beyond the star value lent the project by William Gibson, all of the interest in Agrippa was spawned by this idea of a text — it could have been any text, packaged in any old way, if we’re being brutally honest — that consumed itself as it was being read. This aspect of it seemed to have a deep resonance with things that were currently happening in society writ large, even if few could clarify precisely what those things were in a world perched on the precipice of the Internet Age. And, for all that the poem itself belied his reputation as a writer of science fiction, this aspect of Agrippa also resonated with the previous work of William Gibson, the mainstream media’s go-to spokesman for the (post)modern condition.

Enter, then, the fourth important contributor to Agrippa, a shadowy character who has chosen to remain anonymous to this day and whom we shall therefore call simply the Hacker. He apparently worked at Bolt, Beranek, and Newman, a Boston consulting firm with a rich hacking heritage (Will Crowther of Adventure fame had worked there), and was a friend of Dennis Ashbaugh. Kevin Begos, Jr., contracted with him to write the code for Gibson’s magical disappearing poem. “Dealing with the hacker who did the program has been like dealing with a character from one of your books,” wrote Begos to Gibson in a letter.

The Hacker spent most of his time not coding the actual display of the text — a trivial exercise — but rather devising an encryption scheme to make it impenetrable to the inevitable army of hex-editor-wielding compatriots who would try to extract the text from the code surrounding it. “The encryption,” he wrote to Begos, “has a very interesting feature in that it is context-sensitive. The value, both character and numerical, of any given character is determined by the characters next to it, which from a crypto-analysis or code-breaking point of view is an utter nightmare.”
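The Hacker’s actual routine stayed under wraps for two decades, so I can only gesture at what “context-sensitive” means in practice. The little Python sketch below is strictly my own illustration, not his code: each output byte gets mixed first with everything to its left and then with everything to its right, so the same plaintext character comes out differently depending on where it sits in the text, and single-character frequency analysis tells an attacker very little.

# A toy illustration of a "context-sensitive" cipher, in the spirit of (but
# not identical to) the scheme described above. Two mixing passes make every
# output byte depend on the bytes surrounding it.

def encode(data: bytes, key: int = 0x5A) -> bytes:
    buf = bytearray(data)
    prev = key
    for i in range(len(buf)):            # forward pass: mix in the left context
        buf[i] = (buf[i] + prev) % 256
        prev = buf[i]
    nxt = key
    for i in reversed(range(len(buf))):  # backward pass: mix in the right context
        buf[i] = (buf[i] + nxt) % 256
        nxt = buf[i]
    return bytes(buf)

def decode(cipher: bytes, key: int = 0x5A) -> bytes:
    n = len(cipher)
    mid = bytearray(n)
    for i in range(n):                   # undo the backward pass
        right = cipher[i + 1] if i + 1 < n else key
        mid[i] = (cipher[i] - right) % 256
    out = bytearray(n)
    for i in range(n):                   # undo the forward pass
        left = mid[i - 1] if i > 0 else key
        out[i] = (mid[i] - left) % 256
    return bytes(out)

if __name__ == "__main__":
    sample = b"a text that consumes itself as it is read"   # sample plaintext
    scrambled = encode(sample)
    assert decode(scrambled) == sample   # round-trips cleanly, yet repeated
                                         # letters encode to different bytes

Even this toy version shows why a hex editor alone gets you nowhere: there is no fixed byte value for “e” to hunt for in the scrambled data.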

The Hacker also had to devise a protection scheme to prevent people from simply copying the disk and then running the program from the copy. He tried to add digitized images of some of Ashbaugh’s art to the display, which would have had a welcome unifying effect on an artistic statement that too often seemed to reflect the individual preoccupations of Begos, Ashbaugh, and Gibson rather than a coherent single vision. In the end, however, he gave that scheme up as technically infeasible. Instead he settled for a few digitized sound effects and a single image of a Kodak Agrippa photo album, displayed as the title screen before the text of the poem began to scroll. Below you can see what he ended up creating, exactly as it would have appeared to anyone foolhardy enough to put the disk into her Macintosh back in 1992.


The denizens of cyberspace, many of whom regarded William Gibson more as a god than a prophet, were naturally intrigued by Agrippa from the start, not least thanks to the implicit challenge it presented to crack the protection and thus turn this artistic monument to impermanence into its opposite. The Hacker sent Begos samples of the debates raging on the pre-World Wide Web Internet as early as April of 1992, months before the book’s publication.

“I just read about William Gibson’s new book Agrippa (The Book of the Dead),” wrote one netizen. “I understand it’s going to be published on disk, with a virus that prevents it from being printed out. What do people think of this idea?”

“I seem to recall reading that this stuff about the virus-loaded book was an April Fools joke started here on the Internet,” replied another. “But nobody’s stopped talk about it, and even Tom Maddox, who knows Gibson, seemed to confirm its existence. Will the person who posted the original message please confirm or confess? Was this an April Fools joke or not?”

The Tom Maddox in question, who was indeed personally acquainted with Gibson, replied that the disappearing text “was part of a limited-edition, expensive artwork that Gibson believes was totally subscribed before ‘publication.’ Someone will publish it in more accessible form, I believe (and it will be interesting to see what the cyberpunk audience makes of it — it’s an autobiographical poem, about ten pages long).”

“What a strange world we live in,” concluded another netizen. Indeed.

The others making Agrippa didn’t need the Hacker to tell them how enthusiastically the denizens of cyberspace would attack his code, vying for the cred that would come with being the first to break it. John Perry Barlow, a technology activist and co-founder of the Electronic Frontier Foundation, told Begos that unidentified “friends of his vow to buy and then run Agrippa through a Cray supercomputer to capture the code and crack the program.”

And yet for the first few months after the physical book’s release it remained uncracked. The thing was just so darn expensive, and the few museum curators and rare-books collectors who bought copies neither ran in the same circles as the hacking community nor were likely to entrust their precious disks to one of them.

Interest in the digital component of Agrippa remained high in the press, however, and, just as Tom Maddox had suspected all along, the collaborators eventually decided to give people unwilling to spend hundreds or thousands of dollars on the physical edition a chance to read — and to hear — William Gibson’s poem through another ephemeral electronic medium. On December 9, 1992, the Americas Society of New York City hosted an event called “The Transmission,” in which the magician and comedian Penn Jillette read the text of the poem as it scrolled across a big screen, bookended by question-and-answer sessions with Kevin Begos, Jr., the only member of the artistic trio behind Agrippa to appear at the event. The proceedings were broadcast via a closed-circuit satellite hookup to, as the press release claimed, “a street-corner shopfront on the Lower East Side, the Michael Carlos Museum in Atlanta, the Kitchen in New York City, a sheep farm in the Australian Outback, and others.” Continuing with the juxtaposition of old and new that had always been such a big thematic part of the Agrippa project — if a largely unremarked one — the press release pitched the event as a return to the days when catching a live transmission of one form or another had been the only way to hear a story, an era that had been consigned to the past by the audio- and videocassette.

When did you last hear Hopalong Cassidy on his NBC radio program? When did you last read to your children around a campfire? Have you been sorry that your busy schedule prevented a visit to the elders’ mud hut in New Guinea, where legends of times past are recounted? Have you ever looked closely at your telephone cable to determine exactly how voices and images can come out of the tiny fibers?

Naturally, recording devices were strictly prohibited at the event. Agrippa was still intended to be an ephemeral kairos moment, just like the radio broadcasts of yore.

Of course, it had always been silly to imagine that all traces of the poem could truly be blotted from existence after it had been viewed and/or heard by a privileged few. After all, people reading it on their monitor screens at home could buy video cameras too. Far from denying this reality, Begos imagined an eventual underground trade in fuzzy Agrippa videotapes, much like the bootleg concert tapes traded among fans of Bob Dylan and the Grateful Dead. Continuing with the example set by those artists, he imagined the bootleg trade being more likely to help than to hurt Agrippa‘s cultural cachet. But it would never come to that — for, despite Begos’s halfhearted precautions, the Transmission itself was captured as it happened.

Begos had hired a trio of student entrepreneurs from New York University’s Interactive Television Program to run the technical means of transmission of the Transmission. They went by the fanciful names of “Templar, Rosehammer, and Pseudophred” — names that could have been found in the pages of a William Gibson novel, and that should therefore have set off warning bells in the head of one Kevin Begos, Jr. Sure enough, the trio slipped a videotape into the camera broadcasting the proceedings. The very next morning, the text of the poem appeared on an underground computer bulletin board called MindVox, preceded by the following introduction:

Hacked & Cracked by
-Templar-
Rosehammer & Pseudophred
Introduction by Templar

When I first heard about an electronic book by William Gibson… sealed in an ominous tome of genetic code which smudges to the touch… which is encrypted and automatically self-destructs after one reading… priced at $1,500… I knew that it was a challenge, or dare, that would not go unnoticed. As recent buzzing on the Internet shows, as well as many overt attempts to hack the file… and the transmission lines… it’s the latest golden fleece, if you will, of the hacking community.

I now present to you, with apologies to William Gibson, the full text of AGRIPPA. It, of course, does not include the wonderful etchings, and I highly recommend purchasing the original book (a cheaper version is now available for $500). Enjoy.

And I’m not telling you how I did it. Nyah.

As Matthew Kirschenbaum, the foremost scholar of Agrippa, points out, there’s a delicious parallel to be made with the opening lines of Gibson’s 1981 short story “Johnny Mnemonic,” the first fully realized piece of cyberpunk literature he or anyone else ever penned: “I put the shotgun in an Adidas bag and padded it out with four pairs of tennis socks, not my style at all, but that was what I was aiming for: If they think you’re crude, go technical; if they think you’re technical, go crude. I’m a very technical boy. So I decided to get as crude as possible.” Templar was happy to let people believe he had reverse-engineered the Hacker’s ingenious encryption, but in reality his “hack” had consisted only of a fortuitous job contract and a furtively loaded videotape. Whatever works, right? “A hacker always takes the path of least resistance,” said Templar years later. “And it is a lot easier to ‘hack’ a person than a machine.”

Here, then, is one more irony to add to the collection. Rather than John Perry Barlow’s Cray supercomputer, rather than some genius hacker Gibson would later imagine had “cracked the supposedly uncrackable code,” rather than the “international legion of computer hackers” which the journal Cyberreader later claimed had done the job, Agrippa was “cracked” by a cameraman who caught a lucky break. Within days, it was everywhere in cyberspace. Within a month, it was old news online.

Before Kirschenbaum uncovered the real story, it had indeed been assumed for years, even by the makers of Agrippa, that the Hacker’s encryption had been cracked, and that this had led to its widespread distribution on the Internet — led to this supposedly ephemeral text becoming as permanent as anything in our digital age. In reality, though, it appears that the Hacker’s protection wasn’t cracked at all until long after it mattered. In 2012, the University of Toronto sponsored a contest to crack the protection, which was won in fairly short order by one Robert Xiao. Without taking anything away from his achievement, it should be noted that he had access to resources — including emulators, disk images, and exponentially more sheer computing power — of which someone trying to crack the program on a real Macintosh in 1992 could hardly even have conceived. No protection is unbreakable, but the Hacker’s was certainly unbreakable enough for its purpose.

And so, with Xiao’s exhaustive analysis of the Hacker’s protection (“a very straightforward in-house ‘encryption’ algorithm that encodes data in 3-byte blocks”), the last bit of mystery surrounding Agrippa has been peeled away. How, we might ask at this juncture, does it hold up as a piece of art?

My own opinion is that, when divorced from its cultural reception and judged strictly as a self-standing artwork of the sort we might view in a museum, it doesn’t hold up all that well. This was a project pursued largely through correspondence by three artists who were all chasing somewhat different thematic goals, and it shows in the end result. It’s very hard to construct a coherent narrative of why all of these different elements are put together in this way. What do Ashbaugh’s DNA texts and paintings really have to do with Gibson’s meditation on family memory? (Begos made a noble attempt to answer that question at the Transmission, claiming that recordings of DNA strands would somehow become the future’s version of family snapshots — but if you’re buying that, I have some choice swampland to sell you.) And then, why is the whole thing packaged to look like H.P. Lovecraft’s Necronomicon? Rather than a unified artistic statement, Agrippa is a hodgepodge of ideas that too often pull against one another.

But is it really fair to divorce Agrippa so completely from its cultural reception all those years ago? Or, to put it another way, is it fair to judge Agrippa the artwork based solely upon Agrippa the slightly underwhelming material object? Matthew Kirschenbaum says that “the practical failure to realize much of what was initially planned for Agrippa allowed the project to succeed by leaving in its place the purest form of virtual work — a meme rather than an artifact.” He goes on to note that Agrippa is “as much conceptual art as anything else.” I agree with him on both points, as I do with the online commenter from back in the day who called it “a piece of emergent performance art.” If art truly lives in our memory and our consciousness, then perhaps our opinion of Agrippa really should encompass the whole experience, including its transmission and its reception. Certainly this is the theory that underlies the whole notion of conceptual art — whether the artwork in question involves flying cows or disappearing poems.

It’s ironic — yes, there’s that word again — to note that Agrippa was once seen as an ominous harbinger of the digital future in the way that it showed information, divorced from physical media, simply disappearing into the ether. The reality of the digital age has instead brought exactly the opposite problem, with every action we take and every word we write online being compiled into a permanent record of who we supposedly are — a slate which we can never wipe clean. And this digital permanence has come to apply to the poem “Agrippa” as well, which today is never more than a search query away. Gibson:

The whole thing really was an experiment to see just what would happen. That whole Agrippa project was completely based on “let’s do this. What will happen?” Something happens. “What’s going to happen next?”

It’s only a couple thousand words long, and dangerously like poetry. Another cool thing was getting a bunch of net-heads to sit around and read poetry. I sort of liked that.

Having it wind up in permanent form, sort of like a Chinese Wall in cyberspace… anybody who wants to can go and read it, if they take the trouble. Free copies to everyone. So that it became, really, at the last minute, the opposite of the really weird, elitist thing many people thought it was.

So, Agrippa really was as uncontrollable and unpredictable for its creators as it was for anyone else. Notably, nobody made any money whatsoever off it, despite all the publicity and excitement it generated. In fact, Begos calls it a “financial disaster” for his company; the fallout soon forced him to abandon publishing altogether.

“Gibson thinks of it [Agrippa] as becoming a memory, which he believes is more real than anything you can actually see,” said Begos in a contemporary interview. Agrippa did indeed become a collective kairos moment for an emerging digital culture, a memory that will remain with us for a long, long time to come. Chris in the Morning would be proud.

(Sources: the book Mechanisms: New Media and the Forensic Imagination by Matthew G. Kirschenbaum; Starlog of September 1994; Details of June 1992; New York Times of November 18 1992. Most of all, The Agrippa Files of The University of California Santa Barbara, a huge archive of primary and secondary sources dealing with Agrippa, including the video of the original program in action on a vintage Macintosh.)


Posted by Jimmy Maher on September 7, 2018 in Digital Antiquaria, Interactive Fiction

Tags: agrippa, gibson

The Games of Windows

24Aug

There are two stories to be told about games on Microsoft Windows during the operating environment’s first ten years on the market. One of them is extremely short, the other a bit longer and far more interesting. We’ll dispense with the former first.

During the first half of the aforementioned decade — the era of Windows 1 and 2 — the big game publishers, like most of their peers making other kinds of software, never looked twice at Microsoft’s GUI. Why should they? Very few people were even using the thing.

Yet even after Windows 3.0 hit the scene in 1990 and makers of other kinds of software stampeded to embrace it, game publishers continued to turn up their noses. The Windows API made life easier in countless ways for makers of word processors, spreadsheets, and databases, allowing them to craft attractive applications with a uniform look and feel. But it certainly hadn’t been designed with games in mind; they were so far down on Microsoft’s list of priorities as to be nonexistent. Games were in fact the one kind of software in which uniformity wasn’t a positive thing; gamers craved diverse experiences. As a programmer, you couldn’t even force a Windows game to go full-screen. Instead you were stuck all the time inside the borders of the window in which it ran; this, needless to say, didn’t do much for immersion. It was true that Windows’s library for programming graphics, known as the Graphics Device Interface, or GDI, liberated programmers from the tyranny of the hardware — from needing to program separate modules to interact properly with every video standard in the notoriously diverse MS-DOS ecosystem. Unfortunately, though, GDI was slow; it was fine for business graphics, but unusable for most of the popular game genres.

For all these reasons, game developers, alone among makers of software, stuck obstinately with MS-DOS throughout the early 1990s, even as everything else in mainstream computing went all Windows, all the time. It wouldn’t be until after the first decade of Windows was over that game developers would finally embrace it, helped along both by a carrot (Microsoft was finally beginning to pay serious attention to their needs) and a stick (the ever-expanding diversity of hardware on the market was making the MS-DOS bare-metal approach to programming untenable).

End of story number one.

The second, more interesting story about games on Windows deals with different kinds of games from the ones the traditional game publishers were flogging to the demographic who were happy to self-identify as gamers. The people who came to play these different kinds of games couldn’t imagine describing themselves in those terms — and, indeed, would likely have been somewhat insulted if you had suggested it to them. Yet they too would soon be putting in millions upon millions of hours every year playing games, albeit more often in antiseptic adult offices than in odoriferous teenage bedrooms. Whatever; the fact was, they were still playing games, and playing them enough to make Windows, that allegedly game-unfriendly operating environment, quite probably the most successful gaming platform of the early 1990s in terms of sheer number of person-hours spent playing. And all the while the “hardcore” gamers barely even noticed this most profound democratization of computer gaming that the world had yet seen.


Microsoft Windows, like its inspiration the Apple Macintosh, used what’s known as a skeuomorphic interface — an interface built out of analogues to real-world objects, such as paper documents, a desktop, and a trashcan — to present a friendlier face of computing to people who might have been uncomfortable with the blinking command prompt of yore. It thus comes as little surprise that most of the early Windows games were skeuomorphic as well, being computerized versions of non-threateningly old-fashioned card and board games. In this, they were something of a throwback to the earliest days of personal computing in general, when hobbyists passed around BASIC versions of these same hoary classics, whose simple designs were among the few that could be made to fit into the minuscule memories of the first microcomputers. With Windows, it seemed, the old had become new again, as computer gaming started over to try to capture a whole new demographic.

The very first game ever programmed to run in Windows is appropriately prototypical. When Tandy Trower took over the fractious and directionless Windows project at Microsoft in January of 1985, he found that a handful of applets that weren’t, strictly speaking, a part of the operating environment itself had already been completed. These included a calculator, a rudimentary text editor, and a computerized version of a board game called Reversi.

Reversi is an abstract game for two players that looks a bit like checkers and plays like a faster-paced, simplified version of the Japanese classic Go. Its origins are somewhat murky, but it was first popularized as a commercial product in late Victorian England. In 1971, an enterprising Japanese businessman made a couple of minor changes to the rules of this game that had long been considered in the public domain, patented the result, and started selling it as Othello. Under this name, it enjoys modest worldwide popularity to this day. Under both of its names, it also became an early favorite on personal computers, where its simple rules and relatively constrained possibility space lent themselves well to the limitations of programming in BASIC on a 16 K computer; Byte magazine, the bible of early microcomputer hackers, published a type-in Othello as early as its October 1977 issue.

A member of the Windows team named Chris Peters had decided to write a new version of the game under its original (and non-trademarked) name of Reversi in 1984, largely as one of several experiments — proofs of concept, if you will — into Windows application programming. Tandy Trower then pushed to get some of his team’s experimental applets, among them Reversi, included with the first release of Windows in November of 1985:

When the Macintosh was announced, I noted that Apple bundled a small set of applications, which included a small word processor called MacWrite and a drawing application called MacPaint. In addition, Lotus and Borland had recently released DOS products called Metro and SideKick that consisted of a small suite of character-based applications that could be popped up with a keyboard combination while running other applications. Those packages included a simple text editor, a calculator, a calendar, and a business-card-like database. So I went to [Bill] Gates and [Steve] Ballmer with the recommendation that we bundle a similar set of applets with Windows, which would include refining the ones already in development, as well as a few more to match functions comparable to these other products.

Interestingly, MacOS did not include any full-fledged games among its suite of applets; the closest it came was a minimalist sliding-number puzzle that filled all of 600 bytes and a maze on the “Guided Tour of Macintosh” disk that was described as merely a tool for learning to use the mouse. Apple, whose Apple II was found in more schools and homes than businesses and who were therefore viewed with contempt by much of the conservative corporate computing establishment, ran scared from any association of their latest machine with games. But Microsoft, on whose operating system MS-DOS much of corporate America ran, must have felt they could get away with a little more frivolity.

Still, Windows Reversi didn’t ultimately have much impact on much of anyone. Reversi in general was a game more suited to the hacker mindset than the general public, lacking the immediate appeal of a more universally known design, while the execution of this particular version of Reversi was competent but no more. And then, of course, very few people bought Windows 1 in the first place.

For a long time thereafter, Microsoft gave little thought to making more games for Windows. Reversi stuck around unchanged in the only somewhat more successful Windows 2, and was earmarked to remain in Windows 3.0 as well. Beyond that, Microsoft had no major plans for Windows gaming. And then, in one of the stranger episodes in the whole history of gaming, they were handed the piece of software destined to become almost certainly the most popular computer game of all time, reckoned in terms of person-hours played: Windows Solitaire.

The idea of a single-player card game, perfect for passing the time on long coach or railway journeys, had first spread across Europe and then the world during the nineteenth century. The game of Solitaire — or Patience, as it is still more commonly known in Britain — is really a collection of many different games that all utilize a single deck of everyday playing cards. The overarching name is, however, often used interchangeably with the variant known as Klondike, by far the most popular form of Solitaire.

Klondike Solitaire, like the many other variants, has qualities that make it attractive for computer adaptation on a platform that gives limited scope for programmer ambition. Depending on how one chooses to define such things, a “game” of Solitaire is arguably more of a puzzle than an actual game, and that’s a good thing in this context: the fact that this is a truly single-player endeavor means that the programmer doesn’t have to worry about artificial intelligence at all. In addition, the rules are simple, and playing cards are fairly trivial to represent using even the most primitive computer graphics. Unsurprisingly, then, Solitaire was another favorite among the earliest microcomputer game developers.
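To make the point concrete, here is a minimal sketch of a Klondike deal, written in modern Python rather than the Windows C of 1988, and in no way taken from Wes Cherry’s code: the whole “data model” is a shuffled list of 52 rank-and-suit pairs, dealt into seven tableau piles of one through seven cards, with the remaining 24 cards forming the stock.

# A minimal sketch of how little machinery a Klondike deal needs: a deck is
# just 52 (rank, suit) pairs, and the opening layout is seven piles of 1
# through 7 cards, with only the top card of each pile face up.

import random

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["clubs", "diamonds", "hearts", "spades"]

def new_deck() -> list[tuple[str, str]]:
    return [(rank, suit) for suit in SUITS for rank in RANKS]

def deal_klondike(deck: list[tuple[str, str]]):
    random.shuffle(deck)
    # Seven tableau piles, holding 1, 2, ... 7 cards respectively.
    tableau = [[deck.pop() for _ in range(size)] for size in range(1, 8)]
    stock = deck  # the 24 cards left over form the stock
    return tableau, stock

if __name__ == "__main__":
    tableau, stock = deal_klondike(new_deck())
    for i, pile in enumerate(tableau, start=1):
        rank, suit = pile[-1]  # only the top card of each pile starts face up
        print(f"Pile {i}: {len(pile)} cards, {rank} of {suit} showing")
    print(f"{len(stock)} cards in the stock")

Everything beyond this is bookkeeping about which moves are legal, which is exactly why a summer intern could knock out a perfectly serviceable Solitaire.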

It was for all the same reasons that a university student named Wes Cherry, who worked at Microsoft as an intern during the summer of 1988, decided to make a version of Klondike Solitaire for Windows that was similar to one he had spent a lot of time playing on the Macintosh. (Yes, even when it came to the games written by Microsoft’s interns, Windows could never seem to escape the shadow of the Macintosh.) There was, according to Cherry himself, “nothing great” about the code of the game he wrote; it was neither better nor worse than a thousand other computerized Solitaire games. After all, how much could you really do with Solitaire one way or the other? It either worked or it didn’t. Thankfully, Cherry’s did, and even came complete with a selection of cute little card backs, drawn by his girlfriend Leslie Kooy. Asked what the hardest aspect of writing the game was, he points today to the soon-to-be-iconic cascade of cards that accompanied victory: “I went through all kinds of hoops to get that final cascade as fast as possible.” (Here we have a fine example of why most game programmers held Windows in such contempt…) At the end of his summer internship, he put his Solitaire on a server full of games and other little experiments that Microsoft’s programmers had created while learning how Windows worked, and went back to university.

Months later, some unknown manager at Microsoft sifted through the same server and discovered Cherry’s Solitaire. It seems that Microsoft had belatedly started looking for a new game — something more interesting than Reversi — to include with the upcoming Windows 3.0, which they intended to pitch as hard to consumers as to businesspeople. They now decided that Solitaire ought to be that game. So, they put it through a testing process, getting Cherry to fix the bugs they found from his dorm room in return for a new computer. Meanwhile Susan Kare, the famed designer of MacOS’s look who was now working for Microsoft, gave Leslie Kooy’s cards a bit more polishing.

And so, when Windows 3.0 shipped in May of 1990, Solitaire was included. According to Microsoft, its purpose was to teach people how to use a GUI in a fun way, but that explanation was always something of a red herring. The fact was that computing was changing, machines were entering homes in big numbers once again, and giving people a fun game to play as part of an otherwise serious operating environment was no longer anathema. Certainly huge numbers of people would find Solitaire more than compelling enough as an end unto itself.

The ubiquity that Windows Solitaire went on to achieve — and still maintains to a large extent to this day [1] — is as difficult to overstate as it is to quantify. Microsoft themselves soon announced it to be the “most used” Windows application of all, easily besting heavyweight businesslike contenders like Word, Excel, Lotus 1-2-3, and WordPerfect. The game became a staple of office life all over the world, to be hauled out during coffee breaks and down times, to be kept always lurking minimized in the background, much to the chagrin of officious middle managers. By 1994, a Washington Post article would ask, only half facetiously, if Windows Solitaire was sowing the seeds of “the collapse of American capitalism.”

“Yup, sure,” says Frank Burns, a principal in the region’s largest computer bulletin board, the MetaNet. “You used to see offices laid out with the back of the video monitor toward the wall. Now it’s the other way around, so the boss can’t see you playing Solitaire.”

“It’s swallowed entire companies,” says Dennis J. “Gomer” Pyles, president of Able Bodied Computers in The Plains, Virginia. “The water-treatment plant in Warrenton, I installed [Windows on] their systems, and the next time I saw the client, the first thing he said to me was, ‘I’ve got 2000 points in Solitaire.'”

Airplanes full of businessmen resemble not board meetings but video arcades. Large gray men in large gray suits — lugging laptops loaded with spreadsheets — are consumed by beating their Solitaire scores, flight attendants observe.

Some companies, such as Boeing, routinely remove Solitaire from the Windows package when it arrives, or, in some cases, demand that Microsoft not even ship the product with the game inside. Even PC Magazine banned game-playing during office hours. “Our editor wanted to lessen the dormitory feel of our offices. Advertisers would come in and the entire research department was playing Solitaire. It didn’t leave the best impression,” reported Tin Albano, a staff editor.

Such articles have continued to crop up from time to time in the business pages ever since — as, for instance, the time in 2006 when New York City Mayor Michael Bloomberg summarily terminated an employee for playing Solitaire on the job, creating a wave of press coverage both positive and negative. But the crackdowns have always been to no avail; it’s as hard to imagine the modern office without Microsoft Solitaire as it is to imagine it without Microsoft Office.

Which isn’t to say that the Solitaire phenomenon is limited to office life. My retired in-laws, who have quite possibly never played another computer game in either of their lives, both devote hours every week to Solitaire in their living room. A Finnish study from 2007 found it to be the favorite game of 36 percent of women and 13 percent of men; no other game came close to those numbers. Even more so than Tetris, that other great proto-casual game of the early 1990s, Solitaire is, to certain types of personality at any rate, endlessly appealing. Why should that be?

To begin to answer that question, we might turn to the game’s pre-digital past. Whitmore Jones’s Games of Patience for One or More Players, a compendium of many Solitaire variants, was first published in 1898. Its introduction is fascinating, presaging much of the modern discussion about Microsoft Solitaire and casual gaming in general.

In days gone by, before the world lived at the railway speed as it is doing now, the game of Patience was looked upon with somewhat contemptuous toleration, as a harmless but dull amusem*nt for idle ladies, and was ironically described as “a roundabout method of sorting the cards”; but it has gradually won for itself a higher place. For now, when the work, and still more the worries, of life have so enormously increased and multiplied, the value of a pursuit interesting enough to absorb the attention without unduly exciting the brain, and so giving the mind a rest, as it were, a breathing space wherein to recruit its faculties, is becoming more and more recognised and appreciated.

In addition to illustrating how concerns about the pace of contemporary life and nostalgia for the good old days are an eternal part of the human psyche, this passage points to the heart of Solitaire’s appeal, whether played with real cards or on a computer: the way that it can “absorb the attention without unduly exciting the brain.” It’s the perfect game to play when killing time at the end of the workday, as a palate cleanser between one task and another, or, as in the case of my in-laws, as a semi-active accompaniment to the idle practice of watching the boob tube.

Yet Solitaire isn’t a strictly rote pursuit even for those with hundreds of hours of experience playing it; if it were, it would have far less appeal. Indeed, it isn’t even particularly fair. About 20 percent of shuffles will result in a game that isn’t winnable at all, and Wes Cherry’s original computer implementation at least does nothing to protect you from this harsh mathematical reality. Still, when you get stuck there’s always that “Deal” menu option waiting for you up there in the corner, a tempting chance to reshuffle the cards and try your hand at a new combination. So, while Solitaire is the very definition of a low-engagement game, it’s also a game that has no natural end point; somehow the “Deal” option looks equally tempting whether you’ve just won or just lost. After being sucked in by its comfortable similarity to an analog game of cards almost everyone of a certain age has played, people can and do proceed to keep playing it for a lifetime.

As in the case of Tetris, there’s room to debate whether spending so many hours upon such a repetitive activity as playing Solitaire is psychologically healthy. For my own part, I avoid it and similar “time waster” games as just that — a waste of time that doesn’t leave me feeling good about myself afterward. By way of another perspective, though, there is this touching comment that was once left by a Reddit user to Wes Cherry himself:

I just want to tell you that this is the only game I play. I have autism and don’t game due to not being able to cope with the sensory processing – but Solitaire is “my” game.

I have a window of it open all day, every day and the repetitive clicking is really soothing. It helps me calm down and mentally function like a regular person. It makes a huge difference in my quality of life. I’m so glad it exists. Never thought there would be anyone I could thank for this, but maybe I can thank you. *random Internet stranger hugs*

Cherry wrote Solitaire in Microsoft’s offices on company time, and thus it was always destined to be their intellectual property. He was never paid anything at all, beyond a free computer, for creating the most popular computer game in history. He says he’s fine with this. He’s long since left the computer industry, and now owns and operates a cider distillery on Vashon Island in Puget Sound.

The popularity of Solitaire convinced Microsoft, if they needed convincing, that simple games like this had a place — potentially a profitable place — in Windows. Between 1990 and 1992, they released four “Microsoft Entertainment Packs,” each of which contained seven little games of varying degrees of inspiration, largely cobbled together from more of the projects coded by their programmers in their spare time. These games were the polar opposite of the ones being sold by traditional game publishers, which were growing ever more ambitious, with increasingly elaborate storylines and increasing use of video and sound recorded from the real world. The games from Microsoft were instead cast in the mold of Cherry’s Solitaire: simple games that placed few demands on either their players or the everyday office computers Microsoft envisioned running them, as indicated by the blurbs on the boxes: “No more boring coffee breaks!”; “You’ll never get out of the office!” Bruce Ryan, the manager placed in charge of the Entertainment Packs, later summarized the target demographic as “loosely supervised businesspeople.”

The centerpiece of the first Entertainment Pack was a passable version of Tetris, created under license from Spectrum Holobyte, who owned the computer rights to the game. Wes Cherry, still working out of his dorm room, provided a clone of another older puzzle game called Pipe Dream to be the second Entertainment Pack’s standard bearer; he was even compensated this time, at least modestly. As these examples illustrate, the Entertainment Packs weren’t conceptually ambitious in the least, being largely content to provide workmanlike copies of established designs from both the analog and digital realms. Among the other games included were Solitaire variants other than Klondike, a clone of the Activision tile-matching hit Shanghai, a 3D Tic-tac-toe game, a golf game (for the ultimate clichéd business-executive experience), and even a version of John Horton Conway’s venerable study of cellular life cycles, better known as the game of Life. (One does have to wonder what bored office workers made of that.)

Established journals of record like Computer Gaming World barely noticed the Entertainment Packs, but they sold more than half a million copies in two years, equaling or besting the numbers of the biggest hardcore hits of the era, such as the Wing Commander series. Yet even that impressive number rather understates the popularity of Microsoft’s time wasters. Given that they had no copy protection, and given that they would run on any computer capable of running Windows, the Entertainment Packs were by all reports pirated at a mind-boggling rate, passed around offices like cakes baked for the Christmas potluck.

For all their success, though, nothing on any of the Entertainment Packs came close to rivaling Wes Cherry’s original Solitaire game in terms of sheer number of person-hours played. The key factor here was that the Entertainment Packs were add-on products; getting access to these games required motivation and effort from the would-be player, along with — at least in the case of the stereotypical coffee-break player from Microsoft’s own promotional literature — an office environment easygoing enough that one could carry in software and install it on one’s work computer. Solitaire, on the other hand, came already included with every fresh Windows installation, so long as an office’s system administrators weren’t savvy and heartless enough to seek it out and delete it. The archetypal low-effort game, its popularity was enabled by the fact that it also took no effort whatsoever to gain access to it. You just sort of stumbled over it while trying to figure out this new Windows thing that the office geek had just installed on your faithful old computer, or when you saw your neighbor in the next cubicle playing and asked what the heck she was doing. Five minutes later, it had its hooks in you.

It was therefore significant when Microsoft added a new game — or rather an old one — to 1992’s Windows 3.1. Minesweeper had actually debuted as part of the first Entertainment Pack, where it had become a favorite of quite a number of players. Among them was none other than Bill Gates himself, who became so addicted that he finally deleted the game from his computer — only to start getting his fix on his colleagues’ machines. (This creates all sorts of interesting fuel for the imagination. How do you handle it when your boss, who also happens to be the richest man in the world, is hogging your computer to play Minesweeper?) Perhaps due to the CEO’s patronage, Minesweeper became part of Windows’s standard equipment in 1992, replacing the unloved Reversi.

Unlike Solitaire and most of the Entertainment Pack games, Minesweeper was an original design, written by staff programmers Robert Donner and Curt Johnson in their spare time. That said, it does owe something to the old board game Battleship, to very early computer games like Hunt the Wumpus, and in particular to a 1985 computer game called Relentless Logic. You click on squares in a grid to uncover their contents, which can be one of three things: nothing at all, indicating that neither this square nor any of its adjacent squares contain mines; a number, indicating that this square is clear but said number of its adjacent squares do contain mines; or — unlucky you! — an actual mine, which kills you, ending the game. Like Solitaire, Minesweeper straddles the line — if such a line exists — between game and puzzle, and it isn’t a terribly fair take on either: while the program does protect you to the extent that the first square you click will never contain a mine, it’s possible to get into a situation through no fault of your own where you can do nothing but play the odds on your next click. But, unlike Solitaire, Minesweeper does have more of the trappings of a conventional videogame, including a timer which encourages you to play quickly to achieve the maximum score.
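For the curious, here is a minimal sketch of that uncovering rule, again in Python and again purely my own illustration rather than Donner and Johnson’s code: clicking a square either hits a mine or reports how many of its up-to-eight neighbors are mined, with a count of zero shown as a blank.

# A minimal sketch of the Minesweeper rule described above: revealing a
# square yields a mine, a count of adjacent mines, or a blank for zero.

def reveal(square: tuple[int, int], mines: set[tuple[int, int]]) -> str:
    if square in mines:
        return "*"  # boom: the game is over
    row, col = square
    count = sum(
        (row + dr, col + dc) in mines
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if not (dr == 0 and dc == 0)
    )
    return " " if count == 0 else str(count)  # blank means no adjacent mines

if __name__ == "__main__":
    mines = {(0, 1), (2, 2)}
    print(reveal((1, 1), mines))  # "2": two of the eight neighbors are mined
    print(reveal((4, 4), mines))  # blank: no neighboring mines
    print(reveal((2, 2), mines))  # "*": this square holds a mine

The skill of the game lies entirely in reasoning backward from those numbers to where the mines must be, which is also why an unlucky layout can leave you with nothing to do but guess.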

Doubtless because of those more overt videogame trappings, Minesweeper never became quite the office fixture that Solitaire did. Those who did get sucked in by it, however, found it even more addictive, perhaps not least because it does demand a somewhat higher level of engagement. It too became an iconic part of life with Microsoft Windows, and must rank high on any list of the most-played computer games of all time, if only the data existed to compile such a thing. After all, it did enjoy one major advantage over even Solitaire for office workers with uptight bosses: it ran in a much smaller window, and thus stood out far less on a crowded screen when peering eyes glanced into one’s cubicle.

Microsoft included a third game with Windows for Workgroups 3.1, a variant of the operating environment intended for networked offices. True to that theme, Hearts was a version of the evergreen card game which could be played against computer opponents, but which was most entertaining when played together by up to four real people, all on separate computers. Its popularity was somewhat limited by the fact that it came only with Windows for Workgroups, but, again, “limited” is a relative term. By any normal computer-gaming standard, Hearts was hugely popular indeed for quite some years, serving for many people as their introduction to the very concept of online gaming — a concept destined to remake much of the landscape of computer gaming in general in years to come. Certainly I can remember many a spirited Hearts tournament at my workplaces during the 1990s. The human, competitive element always made Hearts far more appealing to me than the other games I’ve discussed in this article.

But whatever your favorite happened to be, the games of Windows became a vital part of a process I’ve been documenting in fits and starts over the last year or two of writing this history: an expansion of the demographics that were playing games, accomplished not by making parents and office workers suddenly fall in love with the massive, time-consuming science-fiction or fantasy epics upon which most of the traditional computer-game industry remained fixated, but rather by meeting them where they lived. Instead of five-course meals, Microsoft provided ludic snacks suited to busy lives and limited attention spans. None of the games I’ve written about here are examples of genius game design in the abstract; their genius, to whatever extent it exists, is confined to worming their way into the psyche in a way that can turn them into compulsions. Yet, simply by being a part of the software that just about everybody, with the exception of a few Macintosh stalwarts, had on their computers in the 1990s, they got hundreds of millions of people playing computer games for the first time. The mainstream Ludic Revolution, encompassing the gamification of major swaths of daily life, began in earnest on Microsoft Windows.

(Sources: the book A Casual Revolution: Reinventing Video Games and Their Players by Jesper Juul; Byte of October 1977; Computer Gaming World of September 1992; Washington Post of March 9 1994; New York Times of February 10 2006; online articles at Technologizer, The Verge, B3TA, Reddit, Game Set Watch, Tech Radar, Business Insider, and Danny Glasser’s personal blog.)

Footnotes

1 The game got a complete rewrite for Windows Vista in 2006. Presumably any traces of Wes Cherry’s original code that might have been left were excised at that time. Beginning with Windows 8 in 2012, a standalone Klondike Solitaire game was no longer included as a standard part of every Windows installation — a break with more than twenty years of tradition. Perhaps due to the ensuing public outcry, the advertising-supported Microsoft Solitaire Collection did become a component of Windows 10 upon the latter’s release in 2015.


Posted by Jimmy Maher on August 24, 2018 in Digital Antiquaria, Interactive Fiction

Tags: microsoft, windows

Doing Windows, Part 9: Windows Comes Home

17Aug

This series of articles so far has been a story of business-oriented personal computing. Corporate America had been running for decades on IBM before the IBM PC appeared, so it was only natural that the standard IBM introduced would be embraced as the way to get serious, businesslike things done on a personal computer. Yet long before IBM entered the picture, personal computing in general had been pioneered by hackers and hobbyists, many of whom nursed grander dreams than giving secretaries a better typewriter or giving accountants a better way to add up figures. These pioneers didn’t go away after 1981, but neither did they embrace the IBM PC, which most of them dismissed as technically unimaginative and aesthetically disastrous. Instead they spent the balance of the 1980s using computers like the Apple II, the Commodore 64, the Commodore Amiga, and the Atari ST to communicate with one another, to draw pictures, to make music, and of course to write and play lots and lots of games. Dwarfed already in terms of dollars and cents at mid-decade by the business-computing monster the IBM PC had birthed, this vibrant alternative computing ecosystem — sometimes called home computing, sometimes consumer computing — makes a far more interesting subject for the cultural historian of today than the world of IBM and Microsoft, with its boring green screens and boring corporate spokesmen running scared from the merest mention of digital creativity. It’s for this reason that, a few series like this one aside, I’ve spent the vast majority of my time on this blog talking about the cultures of creative computing rather than those of IBM and Microsoft.

Consumer computing did enjoy one brief boom in the 1980s. From roughly 1982 to 1984, a narrative took hold within the mainstream media and the offices of venture capitalists alike that full-fledged computers would replace the Atari VCS and other game consoles in American homes on a massive scale. After all, computers could play games just like the consoles, but they alone could also be used to educate the kids, write school reports and letters, balance the checkbook, and — that old favorite to which the pundits returned again and again — store the family recipes.

All too soon, though, the limitations of the cheap 8-bit computers that had fueled the boom struck home. As a consumer product, those early computers with their cryptic blinking command prompts were hopeless; at least with an Atari VCS you could just put a cartridge in the slot, turn it on, and play. There were very few practical applications for which they weren’t more trouble than they were worth. If you needed to write a school report, a standalone word-processing machine designed for that purpose alone was often a cheaper and better solution, and the family accounts and recipes were actually much easier to store on paper than in a slow, balky computer program. Certainly paper was the safer choice over a pile of fragile floppy disks.

So, what we might call the First Home Computer Revolution fizzled out, with most of the computers that had been purchased over its course making the slow march of shame from closet to attic to landfill. That minority who persisted with their new computers was made up of the same sorts of personalities who had had computers in their homes before the boom — for the one concrete thing the First Home Computer Revolution had achieved was to make home computers in general more affordable, and thus put them within the reach of more people who were inclined toward them anyway. People with sufficient patience continued to find home computers great for playing games that offered more depth than the games on the consoles, while others found them objects of wonder unto themselves, new oceans just waiting to have their technological depths plumbed by intrepid digital divers. It was mostly young people, who had free time on their hands, who were open to novelty, who were malleable enough to learn something new, and who were in love with escapist fictions of all stripes, who became the biggest home-computer users.

Their numbers grew at a modest pace every year, but the real money, it was now clear, was in business computing. Why try to sell computers piecemeal to teenagers when you could sell them in bulk to corporations? IBM, after having made one abortive stab at capturing home computing as well via the ill-fated PCjr, went where the money was, and all but a few other computer makers — most notable among these home-computer loyalists were Commodore, Atari, and Radio Shack — followed them there. The teenagers, for their part, responded to the business-computing majority’s contempt in kind, piling scorn onto the IBM PC’s ludicrously ugly CGA graphics and its speaker that could do little more than beep and fart at you, all while embracing their own more colorful platforms with typical adolescent zeal.

As the 1980s neared their end, however, the ugly old MS-DOS computer started down an unanticipated road of transformation. In 1987, as part of the misbegotten PS/2 line, IBM introduced a new graphics standard called VGA that, with up to 256 onscreen colors from a palette of more than 260,000, outdid all of the common home computers of the time. Soon after, enterprising third parties like Ad Lib and Creative Labs started making add-on sound cards for MS-DOS machines that could make real music and — just as important for game fanatics — real explosions. Many a home hacker woke up one morning to realize that the dreaded PC clone suddenly wasn’t looking all that bad. No, the technical architecture wasn’t beautiful, but it was robust and mature, and the pressure of having dozens of competitors manufacturing machines meeting the standard kept the bang-for-your-buck ratio very good. And if you — or your parents — did want to do any word processing or checkbook balancing, the software for doing so was excellent, honed by years of catering to the most demanding of corporate users. Ditto the programming tools that were nearer to a hacker’s heart; Borland’s Turbo Pascal alone was a thing of wonder, better than any other programming environment on any other personal computer.

Meanwhile 8-bit home computers like the Apple II and the Commodore 64 were getting decidedly long in the tooth, and the companies that made them were doing a peculiarly poor job of replacing them. The Apple Macintosh was so expensive as to be out of reach of most, and even the latest Apple II, known as the IIGS, was priced way too high for what it was; Apple, having joined the business-computing rat race, seemed vaguely embarrassed by the continuing existence of the Apple II, the platform that had made them. The Commodore Amiga 500 was perhaps a more promising contender to inherit the crown of the Commodore 64, but its parent company had mismanaged their brand almost beyond hope of redemption in the United States.

So, in 1988 and 1989 MS-DOS-based computing started coming home, thanks both to its own sturdy merits and a lack of compelling alternatives from the traditional makers of home computers. The process was helped along by Sierra Online, a major publisher of consumer software who had bet big and early on the MS-DOS standard conquering the home in the end, and were thus out in front of its progress now with a range of appealing games that took full advantage of the new graphics and sound cards. Other publishers, reeling before a Nintendo onslaught that was devastating the remnants of the 8-bit software market, soon followed their lead. By 1990, the vast majority of the American consumer-software industry had joined their counterparts in business software in embracing MS-DOS as their platform of first — often, of only — priority.

Bill Gates had always gone where the most money was. In years past, the money had been in business computing, and so Microsoft, after experimenting briefly with consumer software in the period just before the release of the IBM PC, had all but ignored the consumer market in favor of system software and applications targeted squarely at corporate America. Now, though, the times were changing, as home computers became powerful and cheap enough to truly go mainstream. The media was buzzing about the subject as they hadn’t for years; everywhere it was multimedia this, CD-ROM that. Services like Prodigy and America Online were putting a new, friendlier face on the computer as a tool for communicating and socializing, and game developers were buzzing about an emerging new form of mass-market entertainment, a merger of Silicon Valley and Hollywood. Gates wasn’t alone in smelling a Second Home Computer Revolution in the wind, one that would make the computer a permanent fixture of modern American home life in all the ways the first had failed to do so.

This, then, was the zeitgeist into which Microsoft Windows 3.0 made its splashy debut in May of 1990. It was perfectly positioned both to drive the Second Home Computer Revolution and to benefit from it. Small wonder that Microsoft undertook a dramatic branding overhaul this year, striving to project a cooler, more entertaining image — an image appropriate for a company which marketed not to other companies but to individual consumers. One might say that the Microsoft we still know today was born on May 22, 1990, when Bill Gates strode onto a stage — tellingly, not a stage at Comdex or some other stodgy business-oriented computing event — to introduce the world to Windows 3.0 over a backdrop of confetti cannons, thumping music, and huge projection screens.

The delirious sales of Windows 3.0 that followed were not — could not be, given their quantity — driven exclusively by sales to corporate America. The world of computing had turned topsy-turvy; consumer computing was where the real action was now. Even as they continued to own business-oriented personal computing, Microsoft suddenly dominated in the home as well, thanks to all of the potential rivals to MS-DOS and Windows having capitulated without much of a fight. Countless copies of Windows 3.0 were sold by Microsoft directly to Joe Public to install on his existing home computer, through a toll-free hotline they set up for the purpose. (“Have your credit card ready and call!”) Even more importantly, as new computers entered American homes in mass quantities for the second time in history, they did so with Windows already on their hard drives, thanks to Microsoft’s longstanding deals with the companies that made them.

In April of 1992, Windows 3.1 appeared, sporting as one of its most important new features a set of “multimedia extensions” — this meaning tools for recording and playing back sounds, for playing audio CDs, and, most of all, for running a new generation of CD-ROM-based software sporting digitized voices and music and video clips — which were plainly aimed at the home rather than the business user. Although Windows 3.1 wasn’t as dramatic a leap forward as its predecessor had been, Microsoft nevertheless hyped it to the skies in the mass media, rolling out an $8 million television-advertising campaign among other promotional strategies that would have been unthinkable from the business-focused Microsoft of just a few years earlier. It sold even faster than had its predecessor.

Released in April of 1992, Windows 3.1 was the ultimate incarnation of Windows’s third generation. (A version 3.11 was released the following year, but it confined itself to bug fixes and modest performance tweaks, introducing no significant new features.) It dropped support for 8088-based machines, and with it the old “real mode” of operation; it now ran only in standard mode or 386 enhanced mode. It made welcome strides in terms of stability, even as it still left much to be desired on that front. And this Windows was the last to be sold as an add-on to an MS-DOS which had to be purchased separately. Consumer-grade incarnations of Windows would continue to be built on top of MS-DOS for the rest of the decade, but from Windows 95 on Microsoft would do a better job of hiding their humble foundation by packaging the whole software stack together as a single product.

Stuff like this is the reason Windows always took such a drubbing in comparison to other, slicker computing platforms. In truth, Microsoft was doing the best they could to support a bewildering variety of hardware, a problem with which vendors of turnkey systems like Apple didn’t have to contend. Still, it’s never a great look to have to tell your customers, “If this crashes your computer, don’t worry about it, just try again.” Much the same advice applied to daily life with Windows, noted the scoffers.

Microsoft was rather shockingly lax about validating Windows 3 installations. The product had no copy protection of any sort, meaning one person in a neighborhood could (and often did) purchase a copy and share it with every other house on the block. Others in the industry had a sneaking suspicion that Microsoft really didn’t mind that much if Windows was widely pirated among their non-business customers — that they’d rather people run pirated copies of Windows than a competing product. It was all about achieving the ubiquity which would open the door to all sorts of new profit potential through the sale of applications. And indeed, Windows 3 was pirated like crazy, but it also became thoroughly ubiquitous. As for the end to which Windows’s ubiquity was the means: by the time applications came to represent 25 percent of Microsoft’s unit sales, they already accounted for 51 percent of their revenue. Bill Gates always had an instinct for sniffing out where the money was.

Probably the most important single enhancement in Windows 3.1 was its TrueType fonts. The rudimentary bitmap fonts which shipped with older versions looked… not all that nice on the screen or on the page, reportedly due to Bill Gates’s adamant refusal to pay a royalty for fonts to an established foundry like Adobe, as Apple had always done. This decision led to a confusion of aftermarket fonts in competing formats. If you used some of these more stylish fonts in a document, you couldn’t share that document with anyone else unless she also had installed the same fonts. So, you could either share ugly documents or keep nice-looking ones to yourself. Some choice! Thankfully, TrueType came along to fix all that, giving Macintosh users at least one less thing to laugh at when it came to Windows.

The TrueType format was the result of an unusual cooperative project led by Microsoft and Apple — yes, even as they were battling one another in court. The system of glyphs and the underlying technology to render them were intended to break the stranglehold Adobe Systems enjoyed over high-end printing; Adobe charged a royalty of up to $100 per gadget that employed their own PostScript font system, and were widely seen in consequence as a retrograde force holding back the entire desktop-publishing and GUI ecosystem. TrueType would succeed splendidly in its monopoly-busting goal, to such an extent that it remains the standard for fonts on Microsoft Windows and Apple’s OS X to this day. Bill Gates, no stranger to vindictiveness, joked that “we made [the widely disliked Adobe head] John Warnock cry.”

The other big addition to Windows 3.1 was the “multimedia extensions.” These let you do things like record sounds using an attached microphone and play your audio CDs on your computer. That they were added to what used to be a very businesslike operating environment says much about how important home users had become to Microsoft’s strategy.

In a throwback to an earlier era of computing, MS-DOS still shipped with a copy of BASIC included, and Windows 3.1 automatically found it and configured it for easy access right out of the box — this even though home computing was now well beyond the point where most users would ever try to become programmers. Bill Gates’s sentimental attachment to BASIC, the language on which he built his company before the IBM PC came along, has often been remarked upon by his colleagues, especially since he wasn’t normally a man given to much sentimentality. It was the widespread perception of Borland’s Turbo Pascal as the logical successor to BASIC — the latest great programming tool for the masses — that drove the longstanding antipathy between Gates and Borland’s flamboyant leader, Philippe Kahn. Later, it was supposedly at Gates’s insistence that Microsoft’s Visual BASIC, a Pascal-killer which bore little resemblance to BASIC as most people knew it, nevertheless bore the name.

Windows for Workgroups — a separate, pricier version of the environment aimed at businesses — was distinguished by having built-in support for networking. This wasn’t, however, networking as we think of it today. It was rather intended to connect machines together only in a local office environment. No TCP/IP stack — the networking technology that powers the Internet — was included.

But you could get on the Internet with the right additional software. Here, just for fun, I’m trying to browse the web using Internet Explorer 5 from 1999, the last version made for Windows 3. Google is one of the few sites that work at all — albeit, as you can see, not very well.

All this success — this reality of a single company now controlling almost all personal computing, in the office and in the home — brought with it plenty of blowback. The metaphor of Microsoft as the Evil Empire, and of Bill Gates as the computer industry’s very own Darth Vader, began in earnest in these years of Windows 3’s dominance. Neither Gates nor his company had ever been beloved among their peers, having always preferred making money to making friends. Now, though, the naysayers came out in force. Bob Metcalfe, a Xerox PARC alum famous in hacker lore as the inventor of the Ethernet networking protocol, talked about Microsoft’s expanding “death grip” on innovation in the computer industry. Indeed, zombie imagery was prevalent among many of Microsoft’s rivals; Mitch Kapor of Lotus called the new Windows-driven industry “the kingdom of the dead”: “The revolution is over, and free-wheeling innovation in the software industry has ground to a halt.” Any number of anonymous commenters mused about doing Gates any number of forms of bodily harm. “It’s remarkable how widespread the negative feelings toward Microsoft are,” observed Stewart Alsop. “No one wants to work with Microsoft anymore,” said noted Gates-basher Philippe Kahn of Borland. “We sure won’t. They don’t have any friends left.” Channeling such sentiments, Business Month magazine cropped Gates’s nerdy face onto a body-builder’s body and labeled him the “Silicon Bully” on its cover: “How long can Bill Gates kick sand in the face of the computer industry?”

Setting aside the jealousy that always follows great success, even setting aside for the moment the countless ways in which Microsoft really did play hardball with their competitors, something about Bill Gates rubbed many people the wrong way on a personal, visceral level. In keeping with their new, consumer-friendly image, Microsoft had hired consultants to fix up his wardrobe and work on his speaking style — not to mention to teach him the value of personal hygiene — and he could now get through a canned presentation ably enough. When it came to off-the-cuff interactions, though, he continued to strike many as insufferable. To judge him on the basis of his weedy physique and nasal speaking voice — the voice of the kid who always had to show how smart he was to the rest of the class — was perhaps unfair. But one certainly could find him guilty of a thoroughgoing lack of graciousness.

His team of PR coaches could have told him that, when asked who had contributed the most to the personal-computer revolution, he ought to politely decline to answer, or, even better, modestly reflect on the achievements of someone like his old friend Steve Jobs. But they weren’t in the room with him one day when that exact question was put to him by a smiling reporter, and so, after acknowledging that it really should be answered by “others less biased than me,” he proceeded to make the case for himself: “I will say that I started the first microcomputer-software company. I put BASIC in micros before 1980. I was influential in making the IBM PC a 16-bit machine. My DOS is in 50 million computers. I wrote software for the Mac.” I, I, I. Everything he said was true, at least if one presumed that “I” meant “Bill Gates and the others at Microsoft” in this context. Yet there was something unappetizing about this laundry list of achievements he could so easily rattle off, and about the almost pathological competitiveness it betrayed. We love to praise ambition in the abstract, but most of us find such naked ambition as that constantly displayed by Gates profoundly off-putting. The growing dislike for Microsoft in the computer industry and even in much of the technology press was fueled to a large extent by a personal aversion to their founder.

Which isn’t to say that there weren’t valid grounds for concern when it came to Microsoft’s complete dominance of personal-computer system software. Comparisons to the Standard Oil trust of the Gilded Age were in the air, so much so that by 1992 it was already becoming ironically useful for Microsoft to keep the Macintosh and OS/2 alive and allow them their paltry market share, just so the alleged monopolists could point to a couple of semi-viable competitors to Windows. It was clear that Microsoft’s ambitions didn’t end with controlling the operating system installed on the vast majority of computers in the country and, soon, the world. On the contrary, that was only a means to their real end. They were already using their status as the company that made Windows to cut deep into the application market, invading territory that had once belonged to the likes of Lotus 1-2-3 and WordPerfect. Now, those names were slowly being edged out by Microsoft Excel and Microsoft Word. Microsoft wanted to own more or less all of the software on your computer. Any niche left for outside developers in computing’s new order, it seemed, would exist only at Microsoft’s sufferance. The established makers of big-ticket business applications would have been chilled if they had been privy to the words spoken by Mike Maples, Microsoft’s head of applications, to his own people: “If someone thinks we’re not after Lotus and after WordPerfect and after Borland, they’re confused. My job is to get a fair share of the software applications market, and to me that’s 100 percent.” This was always the problem with Microsoft. They didn’t want to compete in the markets they entered; they wanted to own them.

Microsoft’s control of Windows gave them all sorts of advantages over other application developers which may not have been immediately apparent to the non-technical public. Take, for instance, the esoteric-sounding technology of Object Linking and Embedding, or OLE, which debuted with Windows 3.0 and still exists in current versions. OLE allows applications to share all sorts of dynamic data between themselves. Thanks to it, a word-processor document can include charts and graphs from a spreadsheet, with the one updating itself automatically when the other gets updated. Microsoft built OLE support into new versions of Word and Excel that accompanied Windows 3.0’s release, but refused for many months to tell outside developers how to use it. Thus Microsoft’s applications had hugely desirable capabilities which their competitors did not for a long, long time. Similar stories played out again and again, driving the competition to distraction while Bill Gates shrugged his shoulders and played innocent. “We bend over backwards to make sure we’re not getting special advantage,” he said, while Steve Ballmer talked about a “Chinese wall” between Microsoft’s application and system programmers — a wall which people who had actually worked there insisted simply didn’t exist.
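(For the technically curious: the basic idea behind OLE can be illustrated with a toy model. What follows is only a conceptual sketch in modern Python, with invented names; it is emphatically not the real OLE API of the era, which involved C entry points and a great deal of plumbing. It simply shows the essential trick of a document storing a live reference to another application’s object rather than a dead copy of its data.)

# A toy illustration of "linking": the document holds a reference to an
# object owned by another application and re-renders it on demand, so
# edits made in the source show up automatically. Hypothetical names only.

class SpreadsheetChart:
    """Stands in for a chart living inside a spreadsheet application."""
    def __init__(self, values):
        self.values = values

    def render(self):
        # A real OLE server would draw into the host's window; here we
        # just return a crude text "rendering" of the current data.
        return " | ".join("#" * v for v in self.values)


class WordProcessorDocument:
    """Stands in for a word-processor document that can embed linked objects."""
    def __init__(self):
        self.parts = []              # plain strings or linked objects

    def add_text(self, text):
        self.parts.append(text)

    def link_object(self, obj):
        self.parts.append(obj)       # store the reference, not a copy

    def render(self):
        return "\n".join(part.render() if hasattr(part, "render") else part
                         for part in self.parts)


chart = SpreadsheetChart([1, 3, 2])
doc = WordProcessorDocument()
doc.add_text("Quarterly results:")
doc.link_object(chart)
print(doc.render())                  # the chart appears inside the document

chart.values = [2, 5, 4]             # edit the "spreadsheet"...
print(doc.render())                  # ...and the document reflects the change

The point of the sketch is simply that the document never owns the chart’s data; it asks the chart to draw itself each time. Microsoft’s own applications knew how to play both roles long before outside developers were told how.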

On March 1, 1991, news broke that the Federal Trade Commission was investigating Microsoft for anti-trust violations and monopolistic practices. The investigators specifically pointed to that agreement with IBM that had been announced at the Fall 1989 Comdex, to target low-end computers with Microsoft’s Windows and high-end computers with the two companies’ joint operating system OS/2 — ironically, an “anti-competitive” initiative that Microsoft had never taken all that seriously. Once the FTC started digging, however, they found that there was plenty of other evidence to be turned up, from both the previous decade and this new one.

There was, for instance, little question that Microsoft had always leveraged their status as the maker of MS-DOS in every way they could. When Windows 3.0 came out, they helped to ensure its acceptance by telling hardware makers that the only way they would continue to be allowed to buy MS-DOS for pre-installation on their computers was to buy Windows and start pre-installing that too. Later, part of their strategy for muscling into the application market was to get Microsoft Works, a stripped-down version of the full Microsoft Office suite, pre-installed on computers as well. How many people were likely to go out and buy Lotus 1-2-3 or WordPerfect when they already had similar software on their computer? Of course, if they did need something more powerful, said the little card included with every computer, they could have the full Microsoft Office for the cost of a nominal upgrade fee…

And there were other, far more nefarious stories to tell. There was, for instance, the tale of DR-DOS, a 1988 alternative to MS-DOS from Digital Research which was compatible with Microsoft’s operating system but offered a lot of welcome enhancements. Microsoft went after any clone maker who tried to offer DR-DOS pre-installed on their machines with both carrots (they would undercut Digital Research’s price to the point of basically giving MS-DOS away if necessary) and sticks (they would refuse to license them the upcoming, hotly anticipated Windows 3.0 if they persisted in their loyalty to Digital Research). Later, once the DR-DOS threat had been quelled, most of the features that had made it so desirable turned up in the next release of MS-DOS. Digital Research — a company which Bill Gates seemed to delight in tormenting — had once again been, in the industry’s latest parlance, “Microslimed.”

But Digital Research was neither the first nor the last such company. Microsoft, it was often claimed, had a habit of negotiating with smaller companies under false pretenses, learning what made their technology tick under the guise of due diligence, and then launching their own product based on what they had learned. In early 1990, Microsoft told Intuit, a maker of a hugely successful money-management package called Quicken, that they were interested in acquiring them. After several weeks of negotiations, including lots of discussions about how Quicken was programmed, how it was used in the wild, and what marketing strategies had been most effective, Microsoft abruptly broke off the talks, saying they “couldn’t find a way to make it work.” Before the end of 1990, they had announced Microsoft Money, their own money-management product.

More and more of these types of stories were being passed around. A startup who called themselves Go came to Microsoft with a pen-based computing interface. (Pen-based computing was all the rage at the time; Apple, too, was working on something called the Newton, a sort of pen-based proto-iPad that, like all of the other initiatives in this direction, would turn into an expensive failure.) After spending weeks examining Go’s technology, Microsoft elected not to purchase it or sign them to a contract. But, just days later, they started an internal project to create a pen-based interface for Windows, headed by the engineer who had been in charge of “evaluating” Go’s technology. A meme was emerging, by no means entirely true but perhaps not entirely untrue either, of Microsoft as a company better at doing business than doing technology, who would prefer to copy the innovations of others rather than do the hard work of coming up with their own ideas.

In a way, though, this very quality was a source of strength for Microsoft, the reason that corporate clients flocked to them now like they once had to IBM; the mantra that “no one ever got fired for buying IBM” was fast being replaced in corporate America by “no one ever got fired for buying Microsoft.” “We don’t do innovative stuff, like completely new revolutionary stuff,” Bill Gates admitted in an unguarded moment. “One of the things we are really, really good at doing is seeing what stuff is out there and taking the right mix of good features from different products.” For businesses and, now, tens of millions of individual consumers, Microsoft really was the new IBM: they were safe. You bought a Windows machine not because it was the slickest or sexiest box on the block but because you knew it was going to be well-supported, knew there would be software on the shelves for it for a long time to come, knew that when you did decide to upgrade the transition would be a relatively painless one. You didn’t get that kind of security from any other platform. If Microsoft’s business practices were sometimes a little questionable, even if Windows crashed sometimes or kept on running inexplicably slower the longer you had it on your computer, well, you could live with that. Alan Boyd, an executive at Microsoft for a number of years:

Does Bill have a vision? No. Has he done it the right way? Yes. He’s done it by being conservative. I mean, Bill used to say to me that his job is to say no. That’s his job.

Which is why I can understand [that] he’s real sensitive about that. Is Bill innovative? Yes. Does he appear innovative? No. Bill personally is a lot more innovative than Microsoft ever could be, simply because his way of doing business is to do it very steadfastly and very conservatively. So that’s where there’s an internal clash in Bill: between his ability to innovate and his need to innovate. The need to innovate isn’t there because Microsoft is doing well. And innovation… you get a lot of arrows in your back. He lets things get out in the market and be tried first before he moves into them. And that’s valid. It’s like IBM.

Of course, the ethical problem with this approach to doing business was that it left no space for the little guys who actually had done the hard work of innovating the technologies which Microsoft then proceeded to co-opt. “Seeing what stuff is out there and taking it” — to use Gates’s own words against him — is a very good way indeed to make yourself hated.

During the 1990s, Windows was widely seen by the tech intelligentsia as the archetypal Microsoft product, an unimaginative, clunky amalgam of other people’s ideas. In his seminal (and frequently hilarious) 1999 essay “In the Beginning… Was the Command Line,” Neal Stephenson described operating systems in terms of vehicles. Windows 3 was a moped in this telling, “a Rube Goldberg contraption that, when bolted onto a three-speed bicycle [MS-DOS], enabled it to keep up, just barely, with Apple-cars. The users had to wear goggles and were always picking bugs out of their teeth while Apple owners sped along in hermetically sealed comfort, sneering out the windows. But the Micro-mopeds were cheap, and easy to fix compared with the Apple-cars, and their market share waxed.”

And yet if we wished to identify one Microsoft product that truly was visionary, we could do worse than boring old ramshackle Windows. Bill Gates first put his people to work on it, we should remember, before the original IBM PC and the first version of MS-DOS had even been released — so strongly did he believe even then, just as much as that more heralded visionary Steve Jobs, that the GUI was the future of computing. By the time Windows finally reached the market four years later, it had had occasion to borrow much from the Apple Macintosh, the platform with which it was doomed always to be unfavorably compared. But Windows 1 also included vital features of modern computing that the Mac did not, such as multitasking and virtual memory. No, it didn’t take a genius to realize that these must eventually make their way to personal computers; Microsoft had fine examples of them to look at from the more mature ecosystems of institutional computing, and thus could be said, once again, to have implemented and popularized but not innovated them.

Still, we should save some credit for the popularizers. Apple, building upon the work done at Xerox, perfected the concept of the GUI to such an extent in LisaOS and MacOS that one could say that all of the improvements made to it since have been mere details. But, entrenched in a business model that demanded high profit margins and proprietary hardware, they were doomed to produce luxury products rather than ubiquitous ones. This was the logical flaw at the heart of the much-discussed “1984” television advertisement and much of the rhetoric that continued to surround the Macintosh in the years that followed. If you want to change the world through better computing, you have to give the people a computer they can afford. Thanks to Apple’s unwillingness or inability to do that, it was Microsoft that brought the GUI to the world in their stead — in however imperfect a form.

The rewards for doing so were almost beyond belief. Microsoft’s revenues climbed by roughly 50 percent every year in the few years after the introduction of Windows 3.0, as the company stormed past Boeing to become the biggest corporation in the Pacific Northwest. Someone who had invested $1000 in Microsoft in 1986 would have seen her investment grow to $30,000 by 1991. By the same point, over 2000 employees or former employees had become millionaires. In 1992, Bill Gates was anointed by Forbes magazine the richest person in the world, a distinction he would enjoy for the next 25 years by most reckonings. The man who had been so excited when his company grew to be bigger than Lotus in 1987 now owned a company that was larger than the next five biggest software publishers combined. And as for Lotus alone? Well, Microsoft was now over four times their size. And the Decade of Microsoft had only just begun.

In 2000, the company’s high-water point, an astonishing 97 percent of all consumer computing devices would have some sort of Microsoft software installed on them. In the vast majority of cases, of course, said software would include Microsoft Windows. There would be all sorts of grounds for concern about this kind of dominance even had it not been enjoyed by a company with such a reputation for playing rough as Microsoft. (Or would a company that didn’t play rough ever have gotten to be so dominant in the first place?) In future articles, we’ll be forced to spend a lot more time dealing with Microsoft’s various scandals and controversies, along with reactions to them that took the form of legal challenges from the American government and the European Union and the rise of an alternative ideology of software called the open-source movement.

But, as we come to the end of this particular series of articles on the early days of Windows, we really should give Bill Gates some credit as well. Had he not kept doggedly on with Windows in the face of a business-computing culture that for years wanted nothing to do with it, his company could very easily have gone the way of VisiCorp, Lotus, WordPerfect, Borland, and, one might even say, IBM and Apple for a while: a star of one era of computing that was unable to adapt to the changing times. Instead, by never wavering in his belief that the GUI was computing’s future, Gates conquered the world. That he did so while still relying on the rickety foundation of MS-DOS is, yes, kind of appalling for anyone who values clean, beautiful computer engineering. Yet it also says much about his programmers’ creativity and skill, belying any notion of Microsoft as a place bereft of such qualities. Whatever else you can say about the sometimes shaky edifices that were Windows 3 and its next few generations of successors, the fact that they worked at all was something of a minor miracle.

Most of all, we should remember the huge role that Windows played in bringing computing home once again — and, this time, permanently. The third generation of Microsoft’s GUI arrived at the perfect time, just when the technology and the culture were ready for it. Once a laughingstock, Windows became for quite some time the only face of computing many people knew — in the office and in the home. Who could have dreamed it? Perhaps only one person: a not particularly dreamy man named Bill Gates.

(Sources: the books Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and In the Beginning… Was the Command Line by Neal Stephenson; Computer Power User of October 2004; InfoWorld of May 20 1991 and January 31 1994. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)


Posted by Jimmy Maher on August 17, 2018 in Digital Antiquaria, Interactive Fiction

Tags: ibm, microsoft, ms-dos, windows

Doing Windows, Part 8: The Outsiders

10Aug

Microsoft Windows 3.0’s conquest of the personal-computer marketplace was bad news for a huge swath of the industry. On the software side, companies like Lotus and WordPerfect, only recently so influential that it was difficult to imagine a world that didn’t include them, would never regain the clout they had enjoyed during the 1980s, and would gradually fade away entirely. On the hardware side, it was true that plenty of makers of commodity PC clones were happier to work with a Microsoft who believed a rising tide lifted all their boats than against an IBM that was continually trying to put them out of business. But what of Big Blue themselves, still the biggest hardware maker of all, who were accustomed to dictating the direction of the industry rather than being dictated to by any mere maker of software? And what, for that matter, of Apple? Both Apple and IBM found themselves in the unaccustomed position of being the outsiders in this new Windows era of computing. Each would have to come to terms with Microsoft’s newfound but overwhelming power, even as each remained determined not to give up the heritage of innovation that had gotten them this far.

Having chosen to declare war on Microsoft in 1988, Apple seemed to have a very difficult road indeed in front of them — and that was before Xerox unexpectedly reentered the picture. On December 14, 1989, the latter shocked everyone by filing a $150 million lawsuit of their own, accusing Apple of ripping off the user interface employed by the Xerox Star office system before Microsoft allegedly ripped the same thing off from Apple.

The many within the computer industry who had viewed the implications of Apple’s recent actions with such concern couldn’t help but see this latest development as the perfect comeuppance for their overweening position on “look and feel” and visual copyright. These people now piled on with glee. “Apple can’t have it both ways,” said John Shoch, a former Xerox PARC researcher, to the New York Times. “They can’t complain that Microsoft [Windows has] the look and feel of the Macintosh without acknowledging the Mac has the look and feel of the Star.” In his 1987 autobiography, John Sculley himself had written the awkward words that “the Mac, like the Lisa before it, was largely a conduit for technology” developed by Xerox. How exactly was it acceptable for Apple to become a conduit for Xerox’s technology but unacceptable for Microsoft to become a conduit for Apple’s? “Apple is running around persecuting Microsoft over things they borrowed from Xerox,” said one prominent Silicon Valley attorney. The Xerox lawsuit raised uncomfortable questions of the sort which Apple would have preferred not to deal with: questions about the nature of software as an evolutionary process — ideas building upon ideas — and what would happen to that process if everyone started suing everyone else every time somebody built a better mousetrap.

Still, before we join the contemporary commentators in their jubilation at seeing Apple hoisted with their own petard, we should consider the substance of this latest case in more detail. Doing so requires that we take a closer look at what Xerox had actually created back in the day, and take particularly careful note of which of those creations was named in their lawsuit.

Broadly speaking, Xerox created two different GUI environments in the course of their years of experimentation in this area. The first and most heralded of these was known as the Smalltalk environment, pioneered by the researcher Alan Kay in 1975 on a machine called the Xerox Alto, which had been designed at PARC and was built only in limited quantities, without ever being made available for sale through traditional commercial channels. This was the machine and the environment which Steve Jobs so famously saw on his pair of visits to PARC in December of 1979 — visits which directly inspired first the Apple Lisa and later the Macintosh.

The Smalltalk environment running on a Xerox Alto, a machine built at Xerox PARC in the mid-1970s but never commercially released. Many of the basic ideas of the GUI are here, but much remains to be developed and much is implemented only in a somewhat rudimentary way. For instance, while windows can overlap one another, windows that are obscured by other windows are never redrawn. In this way the PARC researchers neatly avoided one of the most notoriously difficult aspects of implementing a windowing system. When Apple programmer Bill Atkinson was part of the delegation who made that December 1979 visit to PARC, he thought he did see windows that continued to update even when partially obscured by other windows. He then proceeded to find a way to give the Lisa and Macintosh’s windowing engine this capability. Seldom has a misunderstanding had such a fortuitous result.
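(A brief aside for the programmers in the audience: why is repainting partially obscured windows such a pain? Here is a minimal sketch, in modern Python with purely hypothetical names, of the bookkeeping an overlapping-window system has to do the instant a window moves and exposes whatever was underneath it. This, multiplied by arbitrary stacks of windows and arbitrarily shaped exposed regions, is the work the Smalltalk environment sidestepped by never redrawing obscured windows at all.)

from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersect(self, other):
        # Return the overlapping rectangle, or None if there is no overlap.
        x1, y1 = max(self.x, other.x), max(self.y, other.y)
        x2 = min(self.x + self.w, other.x + other.w)
        y2 = min(self.y + self.h, other.y + other.h)
        if x2 <= x1 or y2 <= y1:
            return None
        return Rect(x1, y1, x2 - x1, y2 - y1)

@dataclass
class Window:
    name: str
    frame: Rect

def move_window(stack, top, new_frame):
    """stack is ordered bottom-to-top; moving `top` exposes damage below it."""
    old_frame = top.frame
    top.frame = new_frame
    for w in stack:
        if w is top:
            break
        # Any window beneath that was covered by the old position now has
        # a "damaged" area which its owner must be asked to repaint.
        damage = w.frame.intersect(old_frame)
        if damage:
            print(f"repaint {w.name} within {damage}")

desktop = [Window("editor", Rect(0, 0, 300, 200)),
           Window("clock", Rect(250, 150, 100, 100))]   # clock sits on top
move_window(desktop, desktop[1], Rect(400, 300, 100, 100))

Even this toy version hides the hard parts: a real system has to subtract away any windows still covering the damaged area, clip all drawing to the irregular visible region that remains, and do it fast enough to keep up with a dragged mouse.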

Xerox’s one belated attempt to parlay PARC’s work on the GUI into a real commercial product took the form of the Xerox Star, an integrated office-productivity system costing $16,500 per workstation upon its release in 1981. Neither Kay nor most of the other key minds behind the Alto and Smalltalk were involved in its development. Yet its GUI strikes modern eyes as far more refined than that of Smalltalk. Importantly, the metaphor of the desktop, and the soon-to-be ubiquitous idea of a skeuomorphic user interface built from stand-ins for real-world office equipment — a trash can, file folders, paper documents, etc. — were apparently the brainchildren of the product-focused Star team rather than the blue-sky researchers who worked at PARC during the 1970s.

The Xerox Star office system, which was released in 1981. This system looks much more familiar to our modern eyes than the Xerox Alto’s Smalltalk, sporting such GUI staples as menus, widgets, and icons. Yet it was still lacking in many areas compared to the GUIs that would follow. Windows were neither free-dragging nor overlapping, and its menus were one-shot commands, not drop-down lists. It most resembles VisiCorp’s Visi On among the GUIs we’ve looked at closely in this series of articles. Both products serve as a telling snapshot of the state of the art in GUIs just before Apple shook everything up with the Lisa and Macintosh.

The Star, which failed dismally due to its high price and Xerox’s lack of marketing acumen, is often reduced to little more than a footnote to the story of PARC, treated as a workmanlike translation of PARC’s grand ideas and technologies into a somewhat problematic product. Yet there’s actually an important philosophical difference between Smalltalk and the Star, born of the different engineering cultures that produced them. Smalltalk emphasized programming, to the point that the environment could literally be re-programmed on the fly as you used it. This was very much in keeping with the early ethos of home computing as well, when all machines booted into BASIC and an ability to program was considered key for every young person’s future — when every high school, it seemed, was instituting classes in BASIC or Pascal. The Star, on the other hand, was engineered to ensure that the non-technical office worker never needed to see a line of code; this machine conformed to the human rather than asking the human to conform to it. One might say that Smalltalk was intended to make the joy of computing — of using the computer as the ultimate anything machine — as accessible as possible, while the Star was intended to make you forget that you were using a computer at all.

While I certainly don’t wish to dismiss or minimize the visionary work down at PARC in the 1970s, I do believe that historians of early microcomputer GUIs have tended to somewhat over-emphasize the innovations of Smalltalk and the Alto while selling the Xerox Star’s influence rather short. Steve Jobs’s early visits to PARC are given much weight in the historical record, but it’s sometimes forgotten that anything Apple wished to copy from Smalltalk had to be done from memory; they had no regular access to the PARC technology after those visits. The Star, on the other hand, did ship as a commercial product some two years before the Lisa. Notably, the Star’s philosophy of hiding the “computery” aspects of computing from the user would turn out to be much more in line with the one that guided the Lisa and Macintosh than was Smalltalk’s approach of exposing its innards for all to see and modify. The Star was a closed black box, capable of running only the software provided for it by Xerox. Similarly, the Lisa couldn’t be programmed at all except by buying a second Lisa and chaining the two machines together, and even the Macintosh never had the reputation of being a hacker’s plaything in the way of the earlier, more hobbyist-oriented Apple II. The Lisa and Macintosh thus joined the Star in embracing a clear divide between coding professionals, who wrote the software, and end users, who bought it and used it to get stuff done. One could thus say that they resemble the Star much more than Smalltalk not only visually but philosophically.

Counter-intuitive though it is to the legend of the Macintosh being a direct descendant of the work Steve Jobs saw at PARC, Xerox sued Apple over the interface elements they had allegedly stolen from the Star rather than Smalltalk. In evaluating the merits of their claim today, I’m somewhat hamstrung by the fact that no working emulators of the original Star exist, forcing me to rely on screenshots, manuals, and contemporary articles about the system. (This has changed since this article was written; see Ian Crossfield’s comment below.) Nevertheless, those sources are enough to identify an influence of the Star upon the Macintosh that’s every bit as clear-cut as that of the Macintosh upon Microsoft Windows. It strains the bounds of credibility to believe that the Mac team coincidentally developed a skeuomorphic interface using many of the very same metaphors — including the central metaphor of the desktop — without taking the example of the Star to heart. To this template they added much innovation, including such modern GUI staples as free-dragging and overlapping windows, drop-down menus, and draggable icons, along with staple mouse gestures like the hold-and-drag and the double-click. Nonetheless, the foundations of the Mac can be seen in the Star much more obviously than they can in Smalltalk. Crudely put, Apple copied the Star while adding a whole lot of original ideas to the mix, and then Microsoft copied Apple, adding somewhat fewer ideas of their own. The people rejoicing over the Xerox lawsuit, in other words, had this aspect of the story basically correct, even if they did have a tendency to confuse Smalltalk and the Star and misunderstand which of them Xerox was actually suing over.

MacOS started with the skeuomorphic desktop model of the Xerox Star and added to it such fundamental modern GUI concepts as pull-down menus, hold-and-drag, the double-click, and free-dragging, overlapping windows that update themselves even when partially occluded by others.

Of course, the Xerox lawsuit against Apple was legally suspect for all the same reasons as the Apple lawsuit against Microsoft. If anything, there were even more reasons to question the good faith of Xerox’s lawsuit than Apple’s. The source of Xerox’s sudden litigiousness was none other than Bill Lowe, the former IBM executive whose disastrous PS/2 brainchild had already made his attitude toward intellectual property all too clear. Lowe had made a soft landing at Xerox after leaving IBM, and was now telling the press about the “aggressive stand on copyright and patent issues” his new company would be taking from now on. It certainly sounded like he intended to weaponize the long string of innovations credited to Xerox PARC and the Star — using these ideas not to develop products, but to sue others who dared to do so. Lowe’s hoped-for endgame was weirdly similar to his misbegotten hopes for the PS/2’s Micro Channel Architecture: Xerox would eventually license the right to make GUIs and other products to companies like Apple and Microsoft, profiting off their innovations of the past without having to do much of anything in the here and now. This understandably struck many of the would-be licensees as a less than ideal outcome. That, at least, was something on which Apple, Microsoft, and just about everyone else in the computer industry could agree.

Apple’s legal team was left in one heck of an awkward fix. They would seemingly have to argue against Xerox’s broad interpretation of visual copyright while arguing for that same broad interpretation in their own lawsuit against Microsoft — and all in the same court in front of the same judge. Any victory against Xerox could lead to their own words being used against them to precipitate a loss against Microsoft, and vice versa.

It was therefore extremely fortunate for Apple that Judge Vaughn R. Walker struck down Xerox’s lawsuit almost before it had gotten started. At the time of their court filing, Xerox was already outside the statute of limitations for a copyright-infringement claim of the type that Apple had filed against Microsoft. They had thus been forced to make a claim of “unfair competition” instead — a claim which carried with it a much higher evidentiary standard. On March 24, 1990, Judge Walker tossed the Xerox lawsuit, saying it didn’t meet this standard and making the unhelpful observation to Xerox that it would have made a lot more sense as a copyright claim. Apple had dodged a bullet, and Bill Lowe would have to find some other way to make money for his new company.

With the Xerox sideshow thus dispensed with, Apple’s lawyers could turn their attention back to the main event, their case against Microsoft. The same Judge Walker who had decided in their favor against Xerox had taken over from Judge William Schwarzer in the other case as well. No longer needing to worry about protecting their flank from Xerox, Apple’s lawyers pushed for what they called “total concept” or “gestalt” look and feel as the metric for deciding whether Windows infringed upon MacOS. But on March 6, 1991, Judge Walker agreed with Microsoft’s contention that the case should be decided on a “function by function” basis instead. Microsoft began assembling reels of video demonstrating what they claimed to be pre-Macintosh examples of each one of the ten interface elements that were at issue in the case.

So, even as Windows 3.0 was conquering the world outside the courtroom, both sides remained entrenched in their positions inside it, and the case, already three years old, ground on and on through motion after counter-motion. “We’re going to trial,” insisted Edward B. Stead, Apple’s general counsel, but it wasn’t at all clear when that trial would take place. Part of the problem was the sheer pace of external events. As Windows 3.0 became the fastest-selling piece of commercial software the world had ever seen, the scale and scope of Apple’s grievances just kept growing to match. From the beginning, a key component of Microsoft’s strategy had been to gum up the works in court while Windows 3.0 became a fait accompli, the new standard in personal computing, too big for any court to dare attack. That strategy seemed to be working beautifully. Meanwhile Apple’s motions grew increasingly far-fetched, beginning to take on a distinct taint of desperation.

In May of 1991, for example, Apple’s lawyers surprised everyone with a new charge. Still looking for a way to expand the case beyond those aspects of Windows 2 and 3 which hadn’t existed in Windows 1, they now claimed that the 1985 agreement which had been so constantly troublesome to them in that respect was invalid. Microsoft had allegedly defrauded Apple by saying they wouldn’t make future versions of Windows any more similar to the Macintosh than the first was, and then going against their word. This new charge was a hopeful exercise at best, especially given that the agreement Apple claimed Microsoft had broken had been, if it ever existed, strictly a verbal one; absolutely no language to this effect was to be found in the text of the 1985 agreement. Microsoft’s lawyers, once they picked their jaws up off the floor, were left fairly spluttering with indignation. Attorney David T. McDonald labeled the argument “desperate” and “preposterous”: “We’re on the five-yard line, the goal is in sight, and Apple now shows up and says, ‘How about lacrosse instead of football?'” Thankfully, Judge Walker found Apple’s argument to be as ludicrous as McDonald did, thus sparing us all any more sports metaphors.

On April 14, 1992 — now more than four years on from Apple’s original court filing, in a computing climate transformed almost beyond recognition by the rise of Windows — Judge Walker ruled against Apple’s remaining contentions in devastating fashion. Much of the 1985 agreement was indeed invalid, he said, but not for the reason Apple had claimed. What Microsoft had licensed in that agreement were largely “generic ideas” that should never be susceptible to copyright protection in the first place. Apple was entitled to protect very specific visual elements of their displays, such as the actual icons they used, but they weren’t entitled to protect the notion of a screen with icons in the abstract, nor even that of icons representing specific real-world objects, such as a disk, a folder, or a trash can. Microsoft or anyone else could, in other words, make a GUI with a trash-can icon if they wished; they just couldn’t transplant Apple’s specific rendering of a trash can into their own work. Applying the notion of visual copyright any more broadly than this “would afford too much protection and yield too little competition,” said the judge. Apple’s slippery notion of look and feel, it appeared, was dead as a basis for copyright. After all the years of struggle and at least $10 million in attorney fees on both sides, Judge Walker ruled that Apple’s case was too weak to even present before a jury. “Through five years, there were many points where the case got continuously refined and focused and narrowed,” said a Microsoft spokesman. “Eventually, there was nothing left.”

Still, one can’t accuse Apple of giving up without a fight. They dragged the case out for almost three more years after this seemingly definitive defeat. When the Ninth Circuit Court of Appeals upheld Judge Walker’s judgment in 1994, Apple tried to take the case all the way to the Supreme Court. That august body announced that they would not hear it on February 21, 1995, thus finally putting an end to the whole tortuous odyssey.

The same press which had been so consumed by the case circa 1988 barely noticed its later developments. The narrative of Microsoft’s utter dominance and Apple’s weakness had become so prevalent by the early 1990s that it had become difficult to imagine any outcome other than a Microsoft victory. Yet the case’s anticlimactic ending obscured how dangerous it had once been, not only for Microsoft but for the software industry as a whole. Whatever one thinks in general of the products and business practices of the opposing sides, a victory for Apple would have been a terrible result for the personal-computer industry. The court got this one right in striking all of Apple’s claims down so thoroughly — something that can’t always be said about collisions between technology and the law. Bill Gates could walk away knowing the long struggle had struck an important blow for an ongoing culture of innovation in the software industry. Indeed, like the victory of his hero Henry Ford over a group of automotive patent trolls eighty years before, his victory would benefit his whole industry along with his company — which isn’t to say, of course, that he would have fought the war purely for the sake of altruism.

John Sculley, for his part, was gone from Apple well before the misguided lawsuit he had fostered came to its final conclusion. He was ousted by his board of directors in 1993, after it became clear that Apple would post a loss of close to $200 million for the year. Yet his departure brought no relief to the problems of dwindling market share, dwindling focus, and, most worrisome of all, a dwindling sense of identity. Apple languished, embittered about the ideas Microsoft had “stolen” from them, while Windows conquered the world. One could certainly argue that they deserved a better fate on the basis of a Macintosh GUI that still felt far slicker and more intuitive than Microsoft’s, but the reality was that their own poor decisions, just as much as Microsoft’s ruthlessness, had led them to this sorry place. The mid-1990s saw them mired in the greatest crisis of confidence of their history, licensing the precious Macintosh technology to clone makers and seriously considering breaking themselves up into two companies to appease their angriest shareholder contingents. For several years to come, there would be a real question of whether any part of the company would survive to see the new millennium. Gone were the Jobsian dreams of changing the world through better computing; Apple was reduced to living on Microsoft’s scraps. Microsoft had won in the marketplace as thoroughly as they had in court.

But the full story of Apple’s 1990s travails is one to take up at another time. Now, we should turn to IBM, to see how they coped after the MS-DOS-based Windows, rather than the OS/2-based Presentation Manager, made the world safe for the GUI.

Throughout 1990, that year of wall-to-wall hype over Windows 3.0, Microsoft persisted in dampening expectations for OS/2 in a way that struck IBM as deliberate. The agreement that MS-DOS and Windows were for low-end computers, OS/2 and the Presentation Manager for high-end ones, seemed to have been forgotten by Microsoft as soon as Bill Gates and Steve Ballmer left the Fall 1989 Comdex at which it had been announced. Gates now said that it could take OS/2 another three or four years to inherit the throne from MS-DOS, and by that time it would probably be running Windows rather than Presentation Manager anyway. Ballmer said that OS/2 was really meant to compete with high-end client/server operating systems like Unix, not with desktop operating systems like MS-DOS. They both said that “there will be a DOS 5, 6, and 7, and a Windows 4 and 5.” Meanwhile IBM was predictably incensed by Windows 3.0’s use of protected mode and the associated shattering of the 640 K barrier; that sort of thing was supposed to have been the purview of the more advanced OS/2.

Back in late 1988, Microsoft had hired a system-software architect from DEC named David Cutler to oversee the development of OS/2 2.0. No shrinking violet, he promptly threw out virtually all of the existing OS/2 code, which he pronounced a bloated mess, and started over from scratch on an operating system that would fulfill Microsoft’s original vision for OS/2, being targeted at machines with an 80386 or better processor. The scope and ambition of this project, along with the fact that Microsoft wished to keep it entirely in-house, had turned into yet one more source of tension between the two companies; it could be years still before Cutler’s OS/2 2.0 was ready. There remained little semblance of any coordinated strategy between the two companies, in public or in private.

And yet, in September of 1990, IBM and Microsoft announced a new roadmap for OS/2’s future. The two companies together would finish up one more version of the first-generation OS/2 — OS/2 1.3, which was scheduled to ship the following month — and that would be the end of that lineage. Then IBM would develop an OS/2 2.0 alone — a project they hoped to have done in a year or so — while Cutler’s team at Microsoft continued with the complete rewrite that was now to be marketed as OS/2 3.0.

The announcement, whose substance amounted to a tacit acknowledgement that the two companies simply couldn’t work together anymore on the same project, caused heated commentary in the press. It seemed a convoluted way to evolve an operating system at best, and it was happening at the same time that Microsoft seemed to be charging ahead — and with massive commercial success at that — on MS-DOS and Windows as the long-term face of personal computing in the 1990s. InfoWorld wrote of a “deepening rift” between Microsoft and IBM, characterizing the latest agreement as IBM “seizing control of OS/2’s future.” “Although in effect IBM and Microsoft will say they won’t divorce ‘for the sake of the children,’” said an inside source to the magazine, “in fact they are already separated, and seeking new relationships.” Microsoft pushed back against the “divorce” meme only in the most tepid fashion. “You may not understand our marriage,” said Steve Ballmer, “but we’re not getting divorced.” (One might note that when a couple have to start telling friends that they aren’t getting a divorce, it usually isn’t a good sign about the state of their relationship…)

Charles Petzold, writing in PC Magazine, summed up the situation created by all the mixed messaging: “The key words in operating systems are confusion, uncertainty, anxiety, and doubt. Unfortunately, the two guiding lights of this industry — IBM and Microsoft — are part of the problem rather than part of the solution.” If anything, this view of IBM as an ongoing “guiding light” was rather charitable. OS/2 was drowning in the Windows hype. “The success of Windows 3.0 has already caused OS/2 acceptance to go from dismal to cataclysmic,” wrote InfoWorld. “Analysts have now pushed back their estimates of when OS/2 will gain broad popularity to late this decade, with some predicting that the so-called next-generation operating system is all but dead.”

The final divorce of Microsoft from IBM came soon afterward, giving the lie to all of the denials. In July of 1991, Microsoft announced that the erstwhile OS/2 3.0 was to become its own operating system, separate from both OS/2 and MS-DOS, called Windows NT. With this news, which barely made an impression in the press — it took up less than one quarter of page 87 of that week’s InfoWorld — a decade of cooperation came to an end. From now on, Microsoft and IBM would exist strictly as competitors in a marketplace where Microsoft enjoyed all the advantages. In the final divorce settlement, IBM gave up all rights to the upcoming Windows NT and agreed to pay a small royalty on all future sales of OS/2 (whatever those might amount to), while Microsoft paid a lump sum of around $30 million to be free and clear of their last obligations to the computing giant that had made them what they now were. Microsoft greeted this watershed moment with no sentimentality whatever. In a memo that leaked to the press, Bill Gates instead rejoiced that Microsoft was finally free of IBM’s “poor code, poor design, and other overhead.”

Even as the unlikely partnership’s decade of dominance was passing away, Microsoft’s decade of sole dominion was just beginning. The IBM PC and its clones had become the Wintel standard, and would require no further input from Big Blue, thank you very much. IBM’s share of the standard’s sales was already down to 17 percent, and would just keep on falling from there. “Microsoft is now driving the industry, not IBM,” wrote the newsletter Software Publishing by way of stating the obvious.

Which isn’t to say that IBM was going away. While Microsoft was celebrating their emancipation, IBM continued plodding forward with OS/2 2.0, which, like the aborted version 3.0 that was now to be known as Windows NT, ran only on an 80386 or better. They made a big deal of the work-in-progress at the Fall 1991 Comdex without managing to change the narrative around it one bit. The total bill for OS/2 was approaching an astonishing $1 billion, and they had very little to show for it. One Wall Street analyst pronounced OS/2 “the greatest disaster in IBM’s history. The reverberations will be felt throughout the decade.”

At the end of that year, IBM had to report — incredibly, for the very first time in their history — an annual loss. And it was no trivial loss either. The deficit was $2.8 billion, on revenues that had fallen 6.1 percent from the year before. The following year would be even worse, to the tune of a $5 billion loss. No company in the history of the world had ever lost this much money this quickly; by the last quarter of 1993, IBM would be losing $45 million every day. Microcomputers were continuing to replace the big mainframes and minicomputers that had once been the heart of IBM’s business. Now, though, fewer and fewer of those replacement machines were IBM personal computers; whole segments of their business were simply evaporating. The vague distrust IBM had evinced toward Microsoft for most of the 1980s now seemed amply justified, as all of their worst nightmares came true. IBM seemed old, bloated, and, worst of all, irrelevant next to the fresh-faced young Microsoft.

OS/2 2.0 started reaching consumers in May of 1992. It was a surprisingly impressive piece of work; perhaps the relationship with Microsoft had been as frustrating for IBM’s programmers as it had been for their counterparts at Microsoft. Certainly OS/2 2.0 was a far more sophisticated environment than Windows 3.0. Being designed to run only on 32-bit microprocessors like the 80386 and 80486, it utilized them to their maximum potential, which was much more than one could say for Windows, while also being much more stable than Microsoft’s notoriously crash-prone environment. In addition to native OS/2 software, it could run multiple MS-DOS applications at the same time with complete compatibility, and, in a new wrinkle added to the mix by IBM, could now run many Windows applications as well. IBM called it “a better DOS than DOS and a better Windows than Windows,” a claim which carried a considerable degree of truth. They pointedly cut its suggested list price of $140 to just $50 for Windows users looking to “upgrade.”

Shipping on more than twenty 3.5-inch diskettes, OS/2 2.0 was by far the most elaborate operating system yet made for its family of personal computers. When we boot it up for the first time, we’re given a lengthy interactive tutorial of a sort that was seldom seen in software of 1992 vintage.

The notion of a “Presentation Manager” GUI that’s separate from the core OS/2 operating system has been dropped; OS/2 is now simply OS/2, with a GUI as the standard, built-in interface. From the opening tutorial to the look of its desktop, the whole package reminds one of nothing so much as the much later Windows 95. We have a full-fledged, functioning desktop workspace here, with icons representing folders and disks, and a “shredder” to replace the usual trash can.

After shipping earlier versions of OS/2 with no extra tools or applets whatsoever, IBM got wise this time around and included plenty of stuff to play with, like this neat little music editor.

Some aspects of the interface are a little strange. Dragging with the mouse is accomplished using the right button rather than the left — a fine example of OS/2’s superficial similarity and granular dissimilarity to Windows, which so many users who had to move back and forth between the environments found so frustrating.

Of course, MS-DOS is still around if you need it. Unlike in OS/2 1.x, here you can have as many MS-DOS windows and applications open as you like.

But, despite its many merits, OS/2 2.0 was a lost cause from the start, at least if one’s standard for success was Windows. Windows 3.1 rolled out of Microsoft at almost the same instant, and no amount of comparisons in techie magazines pointing out the alternative operating system’s superiority could have any impact on a mass market that was now thoroughly conditioned to accept Windows as the standard. Giant IBM’s operating system had become, as the New York Times put it, “an unlikely underdog.”

In truth, the contest was so lopsided by this point as to be laughable. Microsoft, who had long-established relationships with the erstwhile clone makers — now known as makers of hardware conforming to the Wintel standard — understood early, as IBM did only much too late, that the best and perhaps only way to get your system software widely accepted was to sell it pre-installed on the computers that ran it. Thus, by the time OS/2 2.0 shipped, Windows already came pre-installed on nine out of ten personal computers on the market, thanks to a smart and well-funded “original equipment manufacturer” sales team that was overseen personally by Steve Ballmer. Simply by buying a new computer, then, one automatically became a Windows user. Running OS/2, on the other hand, required that the purchaser of one of these machines decide to go out and buy an alternative to the perfectly good Microsoft software already on her hard drive, and then go through all the trouble of installing and configuring it. Very few people had the requisite combination of motivation and technical skill for an exercise like that.

As a final indignity, IBM themselves had to bow to customer demand and offer MS-DOS and Windows as an optional alternative to OS/2 on their own machines. People wanted the system software that they used at the office, that their friends had, that could run all of the products on the shelves of their local computer store with 100-percent fidelity (with the exception of that oddball Mac stuff off in the corner, of course). Only the gearheads were going to buy OS/2 because it was a 32-bit instead of a 16-bit operating system or because it offered preemptive instead of cooperative multitasking, and they were a tiny slice of an exploding mass market in personal computing.
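
For readers who want a concrete sense of what that last distinction meant in practice, here is a minimal sketch in C, purely illustrative and emphatically not actual Windows or OS/2 code, of the cooperative model that Windows 3.x relied on, in which the system only keeps running because every task voluntarily hands control back after each small unit of work. A preemptive kernel like OS/2 2.0’s instead interrupts the running task on a timer tick, so no single misbehaving program can monopolize the processor.

/* An illustrative model of cooperative multitasking; not real Windows or OS/2 code.
   Each "task" does one small unit of work and then returns, which is its way
   of yielding. The round-robin loop below only advances because every task
   cooperates; if do_work() ever looped forever, no other task would run again.
   A preemptive scheduler removes that fragility by forcing a context switch
   on a hardware timer interrupt, no matter what the running task is doing. */

#include <stdio.h>

#define NUM_TASKS 3
#define SLICES    4

static void do_work(int task_id, int slice)
{
    printf("task %d: slice %d\n", task_id, slice);
}

int main(void)
{
    /* A cooperative "scheduler": cycle through the tasks, trusting each
       one to return promptly after a single unit of work. */
    for (int slice = 0; slice < SLICES; slice++)
        for (int task = 0; task < NUM_TASKS; task++)
            do_work(task, slice);
    return 0;
}

The practical upshot for users of the era was stability: under a preemptive system like OS/2 2.0, a hung application could simply be terminated, whereas under Windows 3.1’s cooperative model it all too often took the rest of the environment down with it.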

That said, OS/2 did have a better fate than many another alternative operating system during this period of Windows, Windows everywhere. It stayed around for years even in the face of that juggernaut, going through two more major revisions and many minor ones, the very last coming as late as December of 2001. It remained always a well-respected operating system that just couldn’t break through Microsoft’s choke hold on mainstream computing, having to content itself with certain niches — powering automatic teller machines was a big one for a long time — where its stability and robustness served it well.

So, IBM, and Apple as well, had indeed become the outsiders of personal computing. They would retain that dubious status for the balance of the decade of the 1990s, offering alternatives to the monoculture of Windows computing that appealed only to the tech-obsessed, the idealistic, or the just plain contrarian. Even as much of what I’ve related in this article was taking place, they were being forced into one another’s arms for the sake of sheer survival. But the story of that second unlikely IBM partnership — an awkward marriage of two corporate cultures even more dissimilar than those of Microsoft and IBM — must, like so much else, be told at another time. All that’s left to tell in this series is the story of how Windows, with the last of its great rivals bested, finished the job of conquering the world.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Week of September 24 1990 and January 15 1991; InfoWorld of September 17 1990, May 29 1991, July 29 1991, October 28 1991, and September 6 1993; New York Times of December 29 1989, March 24 1990, March 7 1991, May 24 1991, January 18 1992, August 8 1992, January 20 1993, April 19 1993, and June 2 1993; Seattle Times of June 2 1993. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes

1 This has changed since this article was written; see Ian Crossfield’s comment below.


Posted by Jimmy Maher on August 10, 2018 in Digital Antiquaria, Interactive Fiction

Tags: apple, macintosh, microsoft, ms-dos, os/2, windows

