A Frankenstein Writing Clone

One has to admire the genius of plot, a strong concept or idea put into motion, resulting in a book that moves, a true page-turner. If you’ve read or watched Ready Player One, you can relate. Child of the 80s? You really understand. Atari. The DeLorean DMC-12. In the book, Joust plays an integral part in one of the quests. Is the tale perfect? No. But does the perfect tome exist? And so, I was glad when Ready Player Two was released in the Kindle store. Outside a few bad reviews focused more on specific character traits than on the actual tale, a sad side-story of today’s world, I enjoyed the book. From a writing standpoint, there is more telling than showing (a common problem for me too), but the plot is fluid. You read these books fast, enjoying the ride. I missed the creativity of the previous volume, and rarely does a sequel rise above the original, a curse for most efforts. A few exceptions exist in the wild. Percy Jackson number four is the hallmark of that series. Shrek 2, for Bonnie Tyler’s beloved song alone. And the Harry Potter installments crescendoed, each better than the last. When a story is vivid, forge onward, brave reader. Yes, a great reason exists for one’s proposal to death.

At the end of the Ready Player One movie, the protagonist, Wade Watts, asks James Halliday’s avatar, “Are you alive?” And the hokey digital inventor replies, “No. Jim is dead.” Then Wade counters, “Are you a machine?” And the avatar smiles, shakes his head, a somber moment that seemed to last hours, and disappears into the void.

I won’t spoil the sequel’s plot, but the concept raises many questions in today’s world. Can a machine imitate human intelligence? For the tech geeks, this is known as the Turing Test. In the 90s, IBM raced to best Garry Kasparov at chess, finally beating him on the second attempt amidst controversy. Did the tech behemoth cheat? Granted, chess is a defined world, each piece constrained by a certain set of moves with a finite number of options. Feats of yesterday are no longer challenging. Today, instead of a room of servers, the phone in your pocket can out-algorithm the average player. Progress abounds, and the extraordinary becomes nothing with time and distance.

But is this intelligence?

IBM moved on to a more demanding test: besting Jeopardy champions. Understanding and interpreting language, including pop culture, proves far more challenging than calculating options and moves. At the end of the second day of competition, Ken Jennings, with a streak of 74 consecutive wins under his belt, bowed to his “machine overlords.” Having worked at IBM, I can say the company proudly destroyed Alex Trebek’s champions. However, did the machine win? The folks creating the system built a probability index that worked faster and exploited a weakness in the game’s design. Jeopardy has an interesting quirk: you can’t buzz in until the clue has been read, a moment signaled by lights on the board. Chime in too early? The player is locked out. Too late? Someone else controls the board. Without wading through the specifics of how the eye and brain process the outside world, suffice it to say there is a slight delay before the human mind notices what is happening around it. Heady? Yes. Take out the physics, and the machine, known as Watson, possessed a timing advantage and won the buzzer battle throughout the two-day event. It would be interesting to adjust the parameters, making the machine slower to compensate for that delay, and see if Jennings and crew performed better. Who knows, the outcome might be the same.
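If you want to see the intuition, here is a toy sketch in Python, with numbers I invented rather than anything measured from the broadcast: the machine fires a few milliseconds after the ready light, the human needs a couple hundred milliseconds of reaction time, and a handicap knob slows the machine down.

```python
import random

def buzzer_race(trials=10_000, machine_delay_ms=10, handicap_ms=0):
    """Toy model of the buzzer: first to buzz after the ready light wins.
    All timings are invented for illustration."""
    machine_wins = 0
    for _ in range(trials):
        # The machine reacts almost instantly, plus any artificial handicap.
        machine = machine_delay_ms + handicap_ms
        # Human visual reaction time: roughly 200 ms on average, with jitter.
        human = random.gauss(200, 30)
        if machine < human:
            machine_wins += 1
    return machine_wins / trials

for handicap in (0, 100, 200, 300):
    share = buzzer_race(handicap_ms=handicap)
    print(f"handicap {handicap:3d} ms -> machine wins {share:.0%}")
```

With no handicap the machine wins essentially every race; add a couple hundred milliseconds and the contest starts to look human again. A toy, not Watson, but it shows why the buzzer mattered.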

Note, the machine didn’t stumble into this advantage on its own. It was no accident; a team of creative, smart folks engineered the system to exploit it. The rule breakers often win.

Can AI display creativity?

Recently, I decided to create a clone of my writing self. An incredible J. Scott avatar, though not as advanced as James Halliday’s. Despite popular opinion, the internet isn’t the real world. Thank you, God. However, now and then, fictional bits and bytes push or slam into a cold reality. Infinite examples exist: January 6th. Left-wing Russian conspiracy. Pizzagate. Despite numerous catastrophic events, the web is a glorious place where folks share their work, if you tinker and know where to find reasoned reflection. To create a better version of my writing self, I evaluated a flurry of number-crunching systems to model a Frankenstein monster that Hemingway might envy. In my analysis, I compared Torch RNN (an open-source framework backed by Facebook), TensorFlow (an open-source Google jam), and GPT-2 (a large text-based model from OpenAI, the Elon Musk-backed lab). I ended up choosing Torch RNN for simplicity.

The concept of using this old tech is grand. It’s often more accessible than the latest and greatest, and the bugs have been worked out, theoretically. But there is the issue of technical debt. For those unfamiliar with the term, this means someone has to maintain old software: OS updates, security patches, keeping performance within an acceptable range. If the world has moved on to GPT-729 or TensorFlow, is anyone maintaining the old builds? This is a common problem with open-source projects. Eventually, the excitement wanes.

As I worked through the installation instructions from Jeff Johnson’s handy guide, I learned a few lessons while stumbling around in the dark (feel free to skip ahead if you care little about the details):

  • While installing Torch, there is a Python dependency, well, because that’s the language of choice here. However, Torch RNN doesn’t work with the latest and greatest Python. I lucked out: I was coding on a Mac, and the OS still ships an older Python build that matches the prerequisites, so when using Homebrew, the popular Mac package manager, I didn’t have to pin a specific release or lean on Anaconda or other virtualization software to manage the builds. No virtualization required, granted I kicked my technical debt down the road; if Apple drops that bundled Python in a future version, and most likely it will, the party ends. All fun and games until then.
  • HDF5 has moved on, but Torch RNN has remained stagnant. I suppose I could have adjusted the build; however, I took the easy way out and downloaded the previous version. Note, it is best to do this upfront versus installing the current release, realizing nothing works, and then going back to make the changes. But hey, why not jump in with both feet? Breaking the build is part of the fun. I almost gave up here after failing for hours, and then came the eureka moment when the model finally started to build.
  • How do you test your model? I probably tried hundreds of parameter combinations. More RNN units. Varied sequence lengths. Changed batch sizes (larger corrects overfitting issues ... occasionally). Learned the meaning of num_layers. Adjusted gradient clipping. Whatever you do, prepare yourself: a final working model can take eons to build. I had attempts take a solid week to complete. Granted, Apple abandoned the needed GPUs models ago, so slowness abounds (I’d love to see if anyone has played with the new M1 chip).
  • For time and performance reasons, I often ran comparisons using small control groups. Yeah, the thing kids learn in fifth-grade science. In essence, I kept the model the same and adjusted only one variable (num_layers, batch size, RNN size, etc.) in proper scientific-method style. Observed the learning curve for so many epochs. Wrote down the results. Repeated. Then ran the longer model with the winner; there is a sketch of this routine after the list. This maddening process sort of worked. At times, I was disappointed when highly tuned settings I felt sure would become a winner bowed out in defeat after a two-day run. Yes, sometimes statistical modeling doesn’t pan out when introduced to the real world. Have we heard this somewhere before?
  • More text is better. This is why there are so many Shakespeare bots; one reason is that people don’t grasp the language’s nuance these days (computer mistakes in Shakespeare’s English aren’t noticeable), and the other is that the Bard produced a large body of work. At first, I tried building a model using only my own work. I have a few hundred posts and five-plus novels under my belt (some still in progress); yet I didn’t have enough material to build a functional J. Scott clone.
  • To expand my data model, I added other, far superior authors from yesteryear. During this effort, I compiled a list of greats to add to my own body of work. If you include Twain, Joyce, Doyle, and Brontë, and mix in a host of science fiction from a trove of magazines, what do you get? Well, one massive text document that chokes most editing applications. The Mac’s standard text editor? Slow. Atom? Never loaded. I ended up using BBEdit, which allowed me to cut line breaks, build consistent formatting, and clean the document for modeling; a rough script version of that cleanup follows the list. For me, this software proved to be the best all-around text application on the market.
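About that BBEdit cleanup: here is the same idea as a rough Python sketch, in case your corpus chokes the editor too. The file names are placeholders and the real work was done interactively, so treat this as the gist rather than my actual pipeline.

```python
import re
from pathlib import Path

# Placeholder file names; swap in whatever Gutenberg texts and posts you use.
SOURCES = ["twain.txt", "doyle.txt", "bronte.txt", "jscott_posts.txt"]
OUTPUT = Path("frankenstein_corpus.txt")

def clean(text: str) -> str:
    # Join hard-wrapped lines within a paragraph, but keep paragraph breaks.
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    # Collapse runs of spaces and tabs, and trim excess blank lines.
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()

with OUTPUT.open("w", encoding="utf-8") as out:
    for name in SOURCES:
        raw = Path(name).read_text(encoding="utf-8", errors="ignore")
        out.write(clean(raw) + "\n\n")
```

From there, if memory of the torch-rnn README serves, its preprocess.py script turns the single text file into the HDF5 and JSON pair that training expects.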
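And the control-group routine mentioned a few bullets up looks roughly like this in script form: hold a baseline, change one knob per run, and shell out to torch-rnn’s train.lua. The flag names are what I recall from the torch-rnn README, and the baseline numbers and corpus file names are placeholders, so check both against your own install.

```python
import subprocess

# Baseline settings (placeholders); flag names as I recall them from torch-rnn.
BASELINE = {
    "-rnn_size": 256,
    "-num_layers": 2,
    "-batch_size": 50,
    "-seq_length": 50,
    "-grad_clip": 5,
}

# One factor at a time: vary a single knob, hold everything else at baseline.
SWEEPS = {
    "-rnn_size": [128, 512, 1024],
    "-num_layers": [3],
    "-batch_size": [25, 100],
}

def train(overrides):
    params = {**BASELINE, **overrides}
    run_name = "_".join(f"{flag.strip('-')}{value}" for flag, value in overrides.items())
    cmd = ["th", "train.lua",
           "-input_h5", "frankenstein_corpus.h5",      # placeholder corpus files
           "-input_json", "frankenstein_corpus.json",
           "-checkpoint_name", f"cv/{run_name}"]
    for flag, value in params.items():
        cmd += [flag, str(value)]
    subprocess.run(cmd, check=True)  # watch the validation loss it reports at each checkpoint

for flag, values in SWEEPS.items():
    for value in values:
        train({flag: value})  # e.g., only -rnn_size changes on this run
```

Each run gets its own checkpoint name, so comparing the candidates afterward is just a matter of lining up the reported losses.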

Results

After going through the painful installation and weeks of testing iterations, I learned that AI has some merit here and can mimic language reasonably well. Sort of ... That being said, the success ratio is less than ideal. I could make multiple calls to Torch using a handy shim in Atom developed by Robin Sloan of Sourdough fame, but usable passages proved few and far between. Mostly, I used my Mary Shelley creation to find an idea, move in another direction, or just to play if I became stuck or bored.
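For the curious, each call to the clone boils down to something like the sketch below: hand the last few sentences to torch-rnn’s sample.lua as a seed and read back a candidate continuation. The checkpoint path is a placeholder, and the flags and temperature are from memory of the README rather than gospel.

```python
import subprocess

def continue_text(seed, checkpoint="cv/checkpoint_best.t7", length=300, temperature=0.7):
    """Ask the trained char-rnn for one possible continuation of `seed`."""
    cmd = ["th", "sample.lua",
           "-checkpoint", checkpoint,
           "-start_text", seed,
           "-length", str(length),
           "-temperature", str(temperature)]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

print(continue_text("Sometimes, the machine brings a valuable insight to your writing, but"))
```

Most of what comes back is noise; the trick is calling it often and keeping the rare line worth keeping.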

Sometimes, the machine brings a valuable insight to your writing but …

And yes, some of this post was written by my clone, but the trial and error took work. Often the context is missing and the prose clunky in spots, but a genius sentence emerged after more than a few rewrites. Is this a useful tool for writers? Being candid, I don’t know. The copyright issues, clunkiness of prose, immense computational time (a more efficient means has to exist), and lack of continuity highlight the ongoing challenge. But with GPT-3 and future releases from the community, the space will continue to evolve. One day, my clone will live on, leaving this old man to the dust.

Maybe. Until then, I must suffer with fury like a madman at the keyboard, moon aglow, waiting for a better method to emerge. Onward.

Great Books on Gutenberg I Used to Build Frankenstein:

  • Full Works of Sherlock Holmes
  • Complete Mark Twain
  • Moby Dick
  • Pride and Prejudice
  • Heart of Darkness
  • Treasure Island
  • Ulysses
  • Wuthering Heights
  • The Call of the Wild
  • Alice in Wonderland
  • The Turn of the Screw
  • The Brothers Karamazov
  • Gulliver’s Travels
  • H.G. Wells Compilation
  • The Full Works of Poe
  • King Solomon’s Mines

Other Notes:


  • I leveraged Robin Sloan’s Atom add-in to call Torch while writing this essay. It made the experience fun. And yes, there are some odd lines created in this post, but why not propose to death? Loved the addition, despite it being out of place.
  • Gwern provided a deep dive to the extremes of using Torch. I would never have considered turning the RNN size up to 1,000-plus, which created some of the better results, and I feel myself wanting compared to the work done in this space.
  • Looking back at Kasparov’s play against Deep Blue, I found his strategy fascinating; he seemed almost unsure how best to approach the machine.