This past weekend, I listened to a Reset podcast episode about AI being applied to writing.
There were two parts to the episode.
The first part dealt with schools using machine learning to grade kids’ essays, and how one parent tried to reverse-engineer how the system was grading his child’s work after the homework kept receiving low marks.
I hope that my girls will be open to me helping them learn how to write better. Having read a lot of papers and worked in a lot of organizations, it’s wild to me how ineffectively most people write, which can have a massive negative effect on their outcomes.
An ML engineer interviewed on the episode said, roughly, that the results of using ML to grade papers depend heavily on how the ML was implemented. This is basically saying, “it works on my machine.”
This seems so irresponsible in pretty much any context; what you want are additive tools that are foolproof and guaranteed to produce positive outcomes, especially in schools, where teachers need tried-and-true tools. If you can’t guarantee a positive outcome, why would you codify its use at scale?
This is like the attempt to jam computers into schools without the training, the applications, or even a valid use case needed to make proper use of them. Again, this produces few observable positive outcomes, but takes a bite out of a district’s bottom line.
The second part of the episode was an interview with Sigal Samuel (who also wrote about the experience) about bouncing creative writing ideas off GPT-2 for inspiration.
This seems to me a far more promising use case for ML than what pop culture envisions, which is handing off the cognitive overhead of very large systems to AI.
I love the idea of having a companion with you, listening to what you’re saying and giving you flashes of ideas, inspirations, or even just keywords that light up new paths of thinking. In many ways this could be therapeutic (or depressing, if you follow the movie Her), and it could unleash a lot of creativity, particularly for writers. More skeptically, though, it could lead to suggestibility.
That becomes less of a positive trait (help me navigate my ideas into new lands) and more of something that can be exploited (Your Own Littlefinger (TM)). I think Black Mirror sort of touched on this in its most recent season, in the episode featuring Miley Cyrus, whose digital likeness, embodied in a toy figure, changes the behavior of the children who own it.
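For the curious, the idea-bouncing workflow is easy to try yourself, since OpenAI released the GPT-2 weights publicly. A minimal sketch using the Hugging Face `transformers` library (my assumption about tooling; the episode doesn’t say what Samuel actually used) might look like:

```python
# A sketch of "bouncing ideas off GPT-2": feed it the start of a
# sentence and sample a few continuations to use as creative prompts.
# Assumes the Hugging Face `transformers` library and the public
# "gpt2" checkpoint.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled suggestions reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "The lighthouse keeper opened the letter and"
ideas = generator(
    prompt,
    max_new_tokens=30,       # keep each suggestion short
    do_sample=True,          # sampling, so the three outputs differ
    num_return_sequences=3,  # a handful of sparks, not one answer
)

for i, idea in enumerate(ideas, 1):
    print(f"{i}. {idea['generated_text']}")
```

The point isn’t polished prose from the model; each continuation is a spark you can keep, discard, or riff on, which is exactly the companion-style use the episode describes.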
There was a time on the internet when people didn’t really consider the potential for new technology to undermine societies and governments, as we do now. Digital utopianism aged poorly. Now, at least, there are already efforts to explore the negative impacts of GPT-2, according to this OpenAI blog post:
- Cornell University is studying human susceptibility to digital disinformation generated by language models.
- The Middlebury Institute of International Studies Center on Terrorism, Extremism, and Counterterrorism (CTEC) is exploring how GPT-2 could be misused by terrorists and extremists online.
- The University of Oregon is developing a series of “bias probes” to analyze bias within GPT-2.
Still, instead of having a Her/Alexa/Siri reporting home with all my queries, it would be nice to have a cyberpunkish augment that I could maintain myself, tweak as I like, and have help me make faster, more creative decisions. Having a dialogue with this kind of augment would give me what I always seem to come back to in life and in my work: a blend of human and machine.