Truthful Digital: Navigating the Thin Line Between Human and AI-generated
We currently find ourselves engulfed in the buzz and fervour of generative AI, a phase where fascination with the technology's potential has heightened to an intoxicating degree. To illustrate the extent of this phenomenon, let's take MidJourney as an example. This platform is being utilised to create whimsical scenarios, like world leaders harmoniously building sandcastles together – a testament to the surreal and imaginative boundaries AI can push.
Even traditionally sacred institutions aren't immune to this AI revolution. The Pope, an enduring symbol of religious orthodoxy, has been transformed into a digital fashion icon. Meanwhile, the digital landscape witnesses a new trend as a plethora of social media influencers suddenly profess AI expertise, further blurring the lines between human and AI-driven creativity.
Adding to the significance of this new era, the head of Tesla AI has anointed English as the most important programming language of the decade. The endorsement underscores the growing role of natural language in AI and the synergy between human communication and machine learning.
But as we absorb these intriguing developments, a pertinent question looms large: what implications do these advancements hold for the construction of digital products? More importantly, as the world possibly drifts towards an era of AI-generated "truths", do we need a dedicated system to ensure authenticity in digital design?
It is clear that as AI becomes increasingly sophisticated, the line between human and machine-generated content grows thinner. To maintain trust and reliability in our digital experiences, we need to consider design systems that clearly distinguish between human- and AI-created content.
A "truthful design system" wouldn't just highlight the provenance of content but also serve as a seal of transparency and authenticity in a world where 'truth' can be a slippery concept. This system could become an essential tool in the realm of AI, enabling users to discern fact from fiction and navigate the digital world with confidence and clarity. By incorporating this level of transparency into our digital products, we empower users to make informed decisions about the content they consume and interact with.
A few weeks ago, I employed ChatGPT to generate a chunk of developer code. The outcome was rather impressive. The ChatGPT user interface (UI) presented the generated code within a distinguishable black box, utilising a more 'coder-friendly' font for clarity. The thoughtful addition of a 'copy to clipboard' button made it effortless for me to utilise the generated code. After a few minor adjustments, which were merely personal preferences, the code was ready for deployment and functioned flawlessly.
Later the same day, I decided to use ChatGPT once more, this time to generate code designed to hash data - transforming a piece of text from "hello my name is Lee" into a seemingly cryptic hash: 9203aa7ea256c0bdc51a7f365874e681. Once again, the code worked perfectly, demonstrating the efficacy of the AI model.
However, it was during this second round that I encountered something intriguing. Within the same developer-styled box, ChatGPT had provided a sample hashed output for the phrase "Hello world", associated with a hash of 6583aa766e.... Seeing this, I asked ChatGPT to hash "hello my name is lee", and it quickly produced a response in the same coder-focused UI style. Without much thought, I copied the hash, assuming it to be accurate. To my surprise, the hash wasn't real; it was a prediction made by ChatGPT. This was the moment I realised that ChatGPT does not execute the generated code but, rather, predicts what the output could be.
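The lesson generalises: when an LLM appears to show the output of deterministic code, the only way to be certain is to run the code yourself. As a minimal sketch (the 32-character hash above looks like an MD5 digest, so I've assumed MD5 and Python purely for illustration), a local check takes just a few lines:

```python
import hashlib

def md5_hex(text: str) -> str:
    """Compute a real MD5 digest locally rather than trusting a predicted one."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()

# Paste whatever hash the model quoted, then compare it with a locally
# computed value; a mismatch means the output was predicted, not executed.
claimed_by_model = "<hash copied from the chat>"
actual = md5_hex("hello my name is lee")

print("locally computed:", actual)
print("matches model output:", actual == claimed_by_model)
```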
The gravity of such outcomes is highly dependent on context. For example, a language model inventing a name for a character in a children's story may seem harmless. However, consider the same model generating an article that falsely claims a cure for cancer has been found. The implications can range from benign to severe.
It is these nuances that demonstrate the complex and fascinating capabilities of AI. They also highlight the importance of recognising that while AI can provide us with impressive tools and solutions, we should approach the outputs with a critical eye, particularly in sensitive areas where the information's authenticity and accuracy are of utmost importance.
Building upon these experiences and understanding the potential ambiguities that AI-generated content can present, I propose a design framework, or protocol, specific to Large Language Models (LLMs). Initially, I suggest incorporating two key concepts:
The 'Red Box' concept is straightforward. Any content generated by an LLM should be presented within a clearly designated 'Red Box'. This serves to inform the reader or consumer that the content they are viewing has been AI-generated.
If the content is being transmitted via an API, or is in a different media format such as image or video, then it should be encapsulated in a <genai/> tag. This ensures an immediate and clear understanding of the content's origin.
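To make the idea concrete, here is one possible way an API response might wrap LLM output in the proposed <genai/> tag. The element name comes from the proposal above; the 'model' attribute and the helper function are purely illustrative assumptions, not a finished specification.

```python
from xml.etree import ElementTree as ET

def wrap_as_genai(content: str, model: str) -> str:
    """Wrap AI-generated content in a <genai/> envelope so consumers can
    immediately tell it apart from human-authored material."""
    tag = ET.Element("genai", attrib={"model": model})
    tag.text = content
    return ET.tostring(tag, encoding="unicode")

# Example: an API response carrying machine-generated text.
print(wrap_as_genai("A short AI-written summary of the report.", model="ChatGPT-4"))
# <genai model="ChatGPT-4">A short AI-written summary of the report.</genai>
```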
The 'Origin Story Box' adds another layer of transparency. Just as every compelling superhero or villain has an engrossing origin story, AI-generated content is no different.
"With great power comes great responsibility",
Uncle Ben wisely stated in Spider-Man
AI, undeniably a formidable power in today's world, requires accountable handling of its outputs. The 'Origin Story Box' would be appended to every piece of AI-generated content, providing key information about its inception. It might be placed at the end of written content or embedded within the metadata of a data file.
For instance, this could look something like:
Title: Authentic Design in an AI World
Hash: 45442hhhbfdsgjhfs43
Versions:
+Human authored
+AI edited (ChatGPT-4)
+AI translated (en-fr)
+Human reviewed
The 'Title' indicates the name of the content; the 'Hash' provides a one-way hash of the content, enabling its authenticity to be verified; and 'Versions' describes the content's creation process. This could include whether the content was entirely AI-generated, whether a human wrote the initial draft with AI ironing out the errors, or whether there was a source document and AI was used for the translation.
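To show how this could work in practice, here is a rough sketch of generating an Origin Story Box automatically at publication time. The function name, the record layout, and the choice of SHA-256 are my own illustration rather than a fixed standard:

```python
import hashlib

def origin_story(title: str, content: str, versions: list[str]) -> str:
    """Build a simple Origin Story Box: a title, a one-way hash of the
    content for later verification, and the list of creation steps."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    lines = [f"Title: {title}", f"Hash: {digest} (sha256)", "Versions:"]
    lines += [f"+{step}" for step in versions]
    return "\n".join(lines)

article = "Full text of the article goes here..."
print(origin_story(
    "Authentic Design in an AI World",
    article,
    ["Human authored", "AI edited (ChatGPT-4)", "AI translated (en-fr)", "Human reviewed"],
))
```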
The 'Red Box' and 'Origin Story Box' concepts form the foundation of a new protocol for dealing with AI-generated content. By providing visual cues and informative metadata, they help maintain the trustworthiness of our digital products, enabling users to engage confidently and knowledgeably with the AI-dominated landscape.
Title: Truthful Digital: Navigating the Thin Line Between Human and AI-generated
Hash: 13F86D9C4E2B37400E01D82568F6235613260A33874ACDE053B5D97D3C8AD5B9 (sha256)
Versions:
+Human authored
+AI expanded (ChatGPT-4)
+Human reviewed
If you're an organisation thinking 'How do we embrace generative AI safely?', get in touch with us to try our ChatGPT solution for enterprises. Email: [email protected]