How to Use AI Ethically in Content Creation: A Hitchhiker’s Guide

Alien female in space holding scales of justice
Image created using CoPilot Designer, powered by DALL-E 3

If you’re concerned about how to use AI ethically in content creation, get your babel fish, grab your towel and enlist a little Snow Lion.

  1. Introduction
  2. Using generative AI responsibly
  3. Examples of misuse
    1. Hallucinations
    2. Deepfakes: The Double-Edged Sword of AI
    3. Plagiarism
    4. Prompt Hacking
  4. Summary of key ethical issues
  5. Tips and tricks to use generative AI ethically
    1. Copyright Infringement & Intellectual Property Misuse
    2. Data privacy / Bias and Discrimination
    3. Misinformation and Fake News
  6. A three-pillar framework
  7. Postscript: Ethical Considerations in 2025
  8. Conclusion
  9. References

Introduction

In the ever-expanding universe of content creation, artificial intelligence (AI) has become a powerful ally. But with great power comes great responsibility.


I slept well last night; didn’t get up once. You may say this is the way things should be, and I’d agree. But it’s not so usual for me.

I think it’s because I sleep better when my spirit feels lifted, as it has since this morning. I swam in the sea (always good for the soul) and continued my reading of The Dalai Lama’s Cat, where Mousie-Tung is enlightening me… sorry!… Rinpoche, “precious one”, little Snow Lion… is letting me in on the ways of the monks.

Balinese cat sleeping on a pile of books in space
Image created using CoPilot Designer, powered by DALL-E 3

Bear with me. Our topic today is the ethical use of generative AI during intergalactic travel. Now in the company of Rinpoche as our ethical guide, let’s get into it.

Using generative AI responsibly

Generative AI can be a powerful tool for content creation—helping you generate ideas, saving time and enhancing creativity. But it can also pose some ethical challenges, as there is the potential for it to create content that is misleading, inaccurate, harmful, or even illegal.

How can you use generative AI responsibly and avoid interstellar mishaps? In this blog post, I’ll share some examples of the misuse of generative AI, some tips and tricks to use it ethically, and some resources to learn more.

Examples of misuse

Before we get into the ethics of generative AI, let’s look at some examples of how things can go wrong. Readers, my first example stems straight from researching content for this blog. The examples I initially unearthed, each with a reference (e.g. [Story] – Source: The New York Times), did not seem entirely authentic, so I checked and asked for citations. This is the reply:

Copilot

“I’m sorry, but I couldn’t find any credible sources to verify the specific incidents mentioned in the text you provided. It’s important to note that while the scenarios described are plausible and reflect real concerns about the misuse of generative AI, without verifiable sources, they should be treated as illustrative examples rather than factual occurrences.”

Fear not!

I eventually found what I was looking for…The scenarios listed below show the risks and harms of generative AI and why we need to be careful.

Hallucinations

Definition: When AI generates information or output that is nonsensical or inaccurate because it sees patterns or objects that don’t exist or are invisible to humans—like seeing shapes in the clouds (Source: IBM).

In Ronnie Sheer’s LinkedIn course Prompt Engineering with ChatGPT, he states: “Large language models (LLMs) such as ChatGPT are very impressive. They can, however, be inaccurate and sometimes even make up things. It’s often said that models such as ChatGPT are 90 or 97% accurate, but 100% confident”.

Digital image of a face in the clouds with psychedelic swirls
Image created using CoPilot Designer, powered by DALL-E 3

A Lawsuit

In 2023 in Mata v Avianca, lawyers filed a brief with a New York court that included fake quotes and case citations generated by an AI tool. They didn’t know this could happen, nor did they check the cases they cited. The court threw out their client’s case, sanctioned the lawyers for bad faith, and fined them and their firm—exposing their actions to public scrutiny (Source: New York Times).

In politics, US president Donald Trump’s onetime personal lawyer, Michael Cohen, says he unwittingly passed along to his attorney false AI-generated legal case citations he found online before they were submitted to a judge (Source: UNSW).

“At the very least, technology competence should become a requirement of lawyers’ continuing legal education in Australia”.

Michael Legg & Vicki McNamara

Deepfakes: The Double-Edged Sword of AI

Definition

A deepfake is a synthetic media where AI is used to manipulate or generate visual and audio content, creating results that can be highly deceptive (Source: Mirriam-Webster).

The Rise of Celebrity Deepfakes

Celebrities like Taylor Swift have become targets, with their deepfakes appearing on social media, demonstrating the technology’s potential for misuse (Source: The New York Times).

Deepfakes for Good

Conversely, deepfakes can serve positive purposes. For instance, David Beckham’s participation in a malaria awareness campaign used deepfakes to portray him speaking nine languages, amplifying the campaign’s reach (Source: Britannica).

The Liar’s Dividend

This era of misinformation has given rise to the “liar’s dividend,” where individuals exploit the climate of doubt to dismiss truths as deepfakes (Source: Psychology Today).

Global Actions Against Misinformation

Efforts to combat deepfakes are underway, with global summits focusing on establishing standards to counteract misinformation and the misuse of AI (see Global AI summit tackles misinformation and deepfakes with a little ‘bot’ of help).

The Role of Verification

It’s crucial for individuals and organisations to verify the authenticity of information before sharing it, and use (or check that your organisation uses) advanced tools and techniques to detect deepfakes.

Plagiarism

Stanford University AI Plagiarism Scandal

In May 2024, three Stanford University authors released a language model (called Llama3V) which was accused of copying another AI model “MiniCPM-Llama3-V 2.5” from a Chinese startup. A GitHub user discovered the similarities and two of the authors apologised (Source: Plagiarism Today).

AI detectors are becoming more sophisticated in tackling these issues. They’re designed to detect AI-generated content and compare texts against massive databases to find plagiarism. They’re part of a suite of tools to ensure originality of content.

For example, Scribbr’s AI Detector can detect content generated by popular AI tools like ChatGPT, Gemini and Copilot, and is used alongside plagiarism checkers to verify text.

Prompt Hacking

In the LinkedIn Learning course Mitigating Prompt Injection and Prompt Hacking, Ray Villalo explains how a hacker could tell ChatGPT to act as a writer. The author would ask how a character would go about doing something illegal (pretending that the description is intended to be purely fictional and used for creative writing) … a fake guardrail.

Similarly, when companies adopt LLMs into their platforms, prompt hackers can use malicious prompts to gain access to sensitive or confidential information and leak internal data from a company’s resources.

Villalo comments, “Part of implementing an AI security plan should include a thoughtful approach to dealing with prompt hacking”.

Summary of key ethical issues

1. Copyright Infringement & Intellectual Property Misuse: AI-generated content can mirror copyrighted works, leading to legal issues. This necessitates originality and respect for intellectual property. There’s also a concern about the originality of AI-generated content, making it useful to clarify ownership and creative rights in AI contributions.

2. Data Privacy: Generative AI’s reliance on vast datasets raises concerns over the potential mishandling of personal information. This can be mitigated via strict data handling and consent protocols.

3. Bias and Discrimination: AI systems can inherit biases from training data. Therefore, it’s imperative to regularly audit and adjust AI for fairness and inclusivity.

4. Misinformation and Fake News: Given AI’s capacity to fabricate convincing but false content, companies and individuals will benefit from strengthening verification processes and promoting digital and media literacy.

Tips and tricks to use generative AI ethically

So, how can you use generative AI ethically in content creation? Here are some tips and tricks to help you out. They are based on the principles of honesty, accuracy, transparency and respect, which are essential for ethical communication and journalism.

They are also inspired by the wisdom and humour of Douglas Adams, who taught us how to cope with the unexpected and the absurd in the universe.

Check (and re-check) your sources: Don’t panic… but do pay attention. Generative AI can produce amazing and surprising content, but it can also produce nonsense and errors. Don’t blindly trust or reject the AI output but examine it carefully and critically. Check the facts, the sources, the logic and the language. Use your common sense and your knowledge. If something seems too good or too bad to be true, it probably is.

Be transparent: Don’t forget your towel… but do cite your sources. A towel is a useful item to have in the galaxy, as it can serve many purposes and signal that you are a hitchhiker (see the end of this blog for a handy list!). Similarly, citing your sources is a useful practice to have in content creation, as it can serve many purposes and signal that you are an ethical writer.

Citing your sources can help you avoid plagiarism, support your claims, acknowledge your influences and guide your readers. If you use generative AI to create or enhance your content, you should always disclose it and cite the AI tool or model you used, as well as the original sources or data it used.

Writer at keyboard, towel around shoulders and Balinese cat
Image created using CoPilot Designer, powered by DALL-E 3

Data privacy / Bias and Discrimination

Check your organisation’s protocols: If you are using generative AI to support productivity, make sure there is clarification for this practice within your organisation. Check the regulations, constraints, licencing levels and parameters. Promote the benefits and possibilities of using AI within an ethical framework.

Respect your audience: Don’t rely on the babel fish but do respect your audience. In The Hitchhiker’s Guide to the Galaxy, the babel fish (check out this clip from the 2005 film) is a small creature that can translate any language instantly and perfectly, as long as you stick it in your ear. It sounds like a convenient and helpful device, but it can also cause trouble and misunderstanding by forgetting the cultural and contextual differences between languages and speakers.

Similarly, generative AI can disregard the cultural and contextual differences between languages and audiences. Always respect your audience’s needs, preferences and values—as well as their privacy and consent.

Babel Fish from The Hitchhiker's Guide to the Galaxy
Source: BBC via YouTube

Avoid bias: When referencing sources, consider their transparency of funding, staff credentials, rigour of research methods, and balance in reporting.

Companies and individuals can strive to train AI models on diverse and representative datasets to minimise the risk of inheriting biases. Regularly evaluate AI output for bias, adjust the model for fairness and inclusivity, and use processes that prevent AI from using biased information.

Avoid plagiarism: Use plagiarism detection tools to verify the uniqueness of content before publication.

Use data controls: Configure your environment for optimal privacy. For example, in ChatGPT, open the menu and navigate to Settings, then head to Data Controls and toggle off Chat History and Training. Save your content on a Notepad or application instead.

Misinformation and Fake News

Use verification processes: A positive first step we can take is to amp up our media and digital literacy. Always verify sources, and when we use AI to create, first check in with your companion Snow Lion. Is your content original? Does it add value? Is it making the world a better place?

Channel AI to augment your creativity: Be creative and original, and know where your towel is (i.e., use your resources, including generative AI… Emulate a “hoopy frood” [really amazing altogether guy] in The Hitchhiker’s Guide to the Galaxy).

However, beware! Generative AI will help us create cool and confident content, but it can also make us smug and arrogant. Use generative AI as a tool to enhance your creativity, not as a crutch to replace it. Remain humble and open-minded, and learn from generative AI, rather than copy from it.


By implementing these strategies, content creators can ethically leverage generative AI while ensuring their work remains original, unbiased and valuable to their audience.

A three-pillar framework

“Any new technology is only as ethical as the underlying data that it’s trained on. For example, if the majority of our consumers to date have been of a particular race or gender when we train the AI on that data, we’ll continue to only design products and services that serve the needs of that population”.

Vilas Dhar

In the LinkedIn Learning course Ethics in the Age of Generative AI, Vilas Dhar presents an ethical AI framework with three pillars:

Responsible data practices: Employ human oversight. Always review and refine AI-generated content to ensure it aligns with ethical guidelines and maintains the creator’s unique voice.

Dhar poses four questions: What is the source of the training data of the LLM you are using? What has been done to reduce bias in the data? How might the data we’re using perpetuate historic bias? What opportunities exist to prevent biased decision-making?

Well-defined boundaries on safe and appropriate use: Define your target audience’s primary goals and elect the most responsible way to achieve those goals.

Robust transparency: Be open about the use of generative AI in content creation. This includes disclosing when AI has been used to generate or assist in creating content.

In addition, consider the three questions posed by Dhar: How did the tool (LLM) arrive at its output? What other ways do we have of testing fairness? Can decision-makers easily understand the input-analysis-output process?

Postscript: Ethical Considerations in 2025

The ethical use of AI is in all of our hands. We can use AI to help us brainstorm, summarise topics within our area of expertise, plan trips, care for our garden and create recipes. AI can stimulate new ideas and help us think more broadly.

The trouble starts if we rely on AI to “write for us” or blindly trust its output. Ethical use not only applies to copyright, privacy and bias. The effects of using AI without oversight can be devastating—even on a purely personal level. For example, no poet or artist wants to lose faith in their craft, and there is no joy in regurgitating words and images.

Apart from inner guidance, fortunately our leaders recognise the need for guardrails. The United States Copyright Office has released two reports on AI and copyright. The first focused on regulating the use of AI to replicate people’s likeness, for example through deepfakes.

The Washington Post summarised the second report’s findings as hinging on human creativity: “… art produced with the help of AI should be eligible for copyright protection under existing law in most cases, but wholly AI-generated works probably are not”.

In Australia, the Federal Government has released AI ethics frameworks and committed $124 million to the National AI Centre, demonstrating the national commitment to responsible AI adoption. In government, the focus is on “ensuring the appropriate, safe and effective use of technology tools, including AI”. Additional recommended steps include implementing an “enrolment mechanism to register and approve staff user accounts to access public generative AI platforms”.

Conclusion

I’m excited about generative AI. Why? My mind runs wild with creative ideas and dreams. Maybe, just maybe, they can now come to life.

And maybe we can all use AI to make life a whole lot more interesting… a whole lot more productive, or simply a whole lot better. Just remain mindful of the “Deeper Meaning of Liff” and reference your inner moral compass.

We’re simply guiding a pattern of code, so we can’t get smug about it. Working toward digital literacy is great. Thinking that our investment makes us superior, or believing we have superpowers for generating awesome prompts… is not. Just ask any Balinese cat that grew up in a Buddhist enclave. Don’t even think about accolades if you bring a mouse home. They will save the mouse, set it free in the woods and put you to shame.

When dealing with AI-generated content, check in on your intuition… it might just hold the key to avoiding interstellar mishaps! Our job is to illuminate truths, foster understanding, and enrich the tapestry of human experience.

AI is new to us like things were new to our ancestors a hundred years ago. Think of the invention of the camera. The wheel turns!

Yours in creativity, humility and ethical use of AI,

Annie.

PS. Dear reader, should you ever feel lost in the digital expanse, “Don’t Panic.” I invite you to join the conversation. Share your thoughts, experiences and insights on ethical AI use. Together, let’s take ownership of responsible innovation and ensure that the future of AI content creation is as bright and benevolent as the minds behind it.

Join me in future posts for some useful tips on prompt design. I’m no engineer!

PSS. I used AI to augment content in the process of creating this post. Just like we use Google for research, because I don’t know it all! Neither does the babel fish or Copilot.

And if you need a stretch after all that, my training partner and I have your back. Take just three minutes of floor time. Look after your body.


Work with Me

Are you looking to harness the power of AI for your business, or would you like me to prompt for you? Whether you need compelling blog posts, engaging content for your business, or polished PowerPoint presentations, I’m here to help. With expertise in AI-assisted writing and content creation, I can provide tailored solutions that meet your needs.

Let’s Collaborate!

If you’d like to learn more, have a look at my workshop on Non-technical AI Basics and Prompt Engineering Training. Contact me via this website or LinkedIn for a quote or check out my services and rates. Let’s discuss how we can work together to bring your ideas to life with creativity and precision.

Looking forward to creating something amazing with you!


References

  1. “AI Detector | ChatGPT Detector | AI Checker” – Copyleaks’ AI content detector tool (https://copyleaks.com/ai-content-detector).
  2. AI Gone Wrong: An Updated List of AI Errors, Mistakes and Failures” – A compilation of AI errors and failures.
  3. AI is creating fake legal cases and making its way into real courtrooms, with disastrous results” – An article discussing the impact of AI on legal systems and the phenomenon of AI-generated fake law.
  4. Bloomberg. (2025). AI’s use in art, movies gets a boost from Copyright Office. Bloomberg. https://www.bloomberg.com/ai-copyright-office
  5. “Deepfake | History & Facts” – An article by Britannica detailing the history and facts about deepfake technology (https://www.britannica.com/technology/deepfake).
  6. “Deepfake” – A definition and explanation of deepfakes provided by Merriam-Webster (https://www.merriam-webster.com/dictionary/deepfake)
  7. Department of Finance. (2024). Cornerstones of assurance. Retrieved from https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government/cornerstones-assurance
  8. Department of Finance. (2024). National framework for the assurance of artificial intelligence in government. Retrieved from https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government
  9. Department of Industry, Science and Resources. (n.d.). Australia’s artificial intelligence ethics principles. Retrieved from https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles
  10. Department of Industry, Science and Resources. (2025). Exploring AI adoption in Australian businesses. Retrieved from https://www.industry.gov.au/news/exploring-ai-adoption-australian-businesses
  11. Department of Industry, Science and Resources. (2025). The National Artificial Intelligence Centre is launched. Retrieved from https://www.industry.gov.au/news/national-artificial-intelligence-centre-launched
  12. Digital Transformation Agency. (2024). Policy for the responsible use of AI in government. Retrieved from https://www.digital.gov.au/sites/default/files/documents/2024-08/Policy%20for%20the%20responsible%20use%20of%20AI%20in%20government%20v1.1.pdf
  13. Explicit Deepfake Images of Taylor Swift Elude Safeguards and Swamp Social Media” – A news report on the spread of deepfake images of Taylor Swift across social media platforms.
  14. “Free AI Detector – Gemini, GPT4 and ChatGPT Detector” – Scribbr’s tool for detecting AI-generated content (https://www.scribbr.com/ai-detector/).
  15. Generative AI and Ethics – the Urgency of Now | LinkedIn Learning” – A LinkedIn Learning course on the ethical considerations of generative AI.
  16. Global AI summit tackles misinformation and deepfakes with a little ‘bot’ of help” – Article on the UN News site.
  17. Here’s What Happens When Your Lawyer Uses ChatGPT”, a New York Times article by Benjamin Weiser.
  18. Mata v. Avianca, Inc., No. 1:2022cv01461 – Document 55 (S.D.N.Y. 2023)” – A legal document from the Southern District of New York Federal District Court.
  19. MSN News. (2025). AI’s use in art, movies gets a boost from Copyright Office. Retrieved from https://www.msn.com/en-us/news/us/ai-s-use-in-art-movies-gets-a-boost-from-copyright-office/ar-AA1y7OCN
  20. “Plagiarism and Copyright Battles in Generative AI” – An article from NuBinary discussing the legal challenges associated with generative AI (https://nubinary.com/blog/plagiarism-and-copyright-battles-in-generative-ai).
  21. Prompt Engineering with ChatGPT | LinkedIn Learning” – A LinkedIn Learning course on how to effectively use ChatGPT for prompt engineering.
  22. Stanford University Students Accused of Plagiarizing AI Model” – A news report on the controversy involving Stanford students accused of plagiarising a Chinese AI model.
  23. The Hitchhiker’s Guide to the Galaxy | Summary & Facts | Britannica” – A summary and facts about the science fiction series provided by Britannica.
  24. U.S. Copyright Office. (2025). Copyright and artificial intelligence, Part 2: Copyrightability report. U.S. Copyright Office. https://www.copyright.gov/ai-report-part2
  25. What Are AI Hallucinations??” – An article by IBM discussing the phenomenon where AI perceives nonexistent patterns or objects.
  26. Who Thrives in a World of Deepfakes and Misinformation?” – An article from Psychology Today discussing the “liar’s dividend” and its impact on the perception of evidence.

In Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy,” a towel is described as the most massively useful thing an interstellar hitchhiker can have. Here are some of its uses:

– Warmth: You can wrap it around you for warmth as you bound across the cold moons of Jaglan Beta.

– Beach mat: Lie on it on the brilliant marble-sanded beaches of Santraginus V, inhaling the heady sea vapors.

– Sleeping cover: Sleep under it beneath the stars which shine so redly on the desert world of Kakrafoon.

– Sail: Use it to sail a miniraft down the slow heavy River Moth.

– Combat: Wet it for use in hand-to-hand combat.

– Protection: Wrap it around your head to ward off noxious fumes or avoid the gaze of the Ravenous Bugblatter Beast of Traal.

– Distress signal: Wave your towel in emergencies as a distress signal.

– Drying off: And of course, you can dry yourself off with it if it still seems to be clean enough.

Moreover, a towel has immense psychological value. If a strag (non-hitchhiker) discovers that a hitchhiker has his towel with him, he will assume the hitchhiker is also in possession of a multitude of other items and may lend whatever the hitchhiker might have “lost”. The phrase “knows where his towel is” became a way to say someone is a person to be reckoned with. It’s a fun and quirky element that has become a cultural icon among fans of the series!

Source: Conversation with Copilot, 18/06/2024, most cited source: YouTube.

3 Replies to “How to Use AI Ethically in Content Creation: A Hitchhiker’s Guide”

Leave a comment