
Top ten tips on using AI

Published on 23 November 2023

Earlier this month, politicians and industry leaders from across the globe gathered at Bletchley Park to sign a declaration on the future of artificial intelligence (AI).

The Bletchley Declaration established an initial shared understanding of AI and the risks associated with it, setting out how countries will work together to ensure human-centric, trustworthy and responsible use of the technology.

While the AI summit considered the future impact of the emerging technology, occupier Planet IT hosted an event at Milton Park that put a greater focus on the here and now, advising on how to use AI in a safe and responsible way.

James Dell, Head of Technical Architecture & Associate Director at Planet IT, explains more: “With AI constantly in the press and becoming a staple in the everyday computer landscape, we wanted to talk about how the generative technology can be a useful tool when used correctly, and is not as scary as people might think.”

Planet IT reveals its top ten tips for using AI, which it hopes will provide a handy toolkit for those looking to make the most of the technology in a safe and secure manner.

1. Protect confidential data

Training generative AI can expose businesses to data loss. Users should be sure where their data inputs are going and who has ownership of them. With platforms like ChatGPT, all submitted data is used to continually train the model. This means your intellectual property or personal data can end up incorporated into, and become part of, a global model.

2. Don’t submit sensitive info

An estimated 11% of the data employees paste into ChatGPT is confidential, and around 4% of employees have input sensitive data at least once. The most likely cause of data loss via AI is passing these tools information that should not be shared, or intellectual property which doesn't belong in the public domain.

It is also likely that hackers will start targeting AI tools to extract custom training data, as this is where the value for a potential ransom will lie. People should think twice about the content they input into AI tools and consider the reputational and financial risks if it were to fall into the public domain. One way to act on this is a simple pre-submission filter, as sketched below.
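As an illustration of what such a filter might look like, the sketch below strips obviously sensitive strings from a prompt before it is sent to an external tool. The patterns and the `redact` helper are invented for illustration and deliberately minimal; a real deployment would need far broader coverage.

```python
import re

# Hypothetical patterns for common sensitive strings (illustrative, not exhaustive).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before it leaves the business."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Summarise this email from jane.doe@example.com about card 4111 1111 1111 1111."
print(redact(prompt))
# Summarise this email from [REDACTED EMAIL] about card [REDACTED CARD_NUMBER].
```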

3. Learn from bad data

Any system is only as good as the data used to train it. If you have trained your models and systems on data which is not valid, you may need to start again.

The data your systems ingest is crucial to maintaining good system health, and this is even more critical when it comes to AI-based systems.
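As a rough sketch of what such a check could look like before any training run (the record fields and thresholds below are invented for illustration), a few lines of validation can catch data that isn't fit to teach a model:

```python
# Minimal pre-training sanity checks on tabular records (illustrative thresholds).
records = [
    {"age": 34, "salary": 48000},
    {"age": None, "salary": 52000},   # missing value
    {"age": 212, "salary": 61000},    # implausible age
]

def validate(rows):
    """Return rows that pass basic checks, plus everything rejected and why."""
    clean, rejected = [], []
    for row in rows:
        if any(value is None for value in row.values()):
            rejected.append((row, "missing value"))
        elif not 0 <= row["age"] <= 120:
            rejected.append((row, "age out of range"))
        else:
            clean.append(row)
    return clean, rejected

clean, rejected = validate(records)
print(f"{len(clean)} usable rows; {len(rejected)} rejected")
for row, reason in rejected:
    print(f"  rejected {row}: {reason}")
```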

4. Check for bugs

The pace at which companies can deploy generative AI applications is unprecedented in the software development world. The normal controls on software development and lifecycle management may not always be present. 

Code generated by AI is very rarely completely bug-free and should always be checked and validated.
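To make that concrete, here is a hypothetical example: a paginator of the kind a generative tool might plausibly produce, carrying a subtle off-by-one bug, and the simple test that catches it before the code ships. Both the function and the test are invented for illustration.

```python
# An AI tool might plausibly generate a paginator like this:
def paginate(items, page, page_size):
    """Return the given 1-indexed page of items."""
    start = page * page_size          # bug: treats `page` as 0-indexed
    return items[start:start + page_size]

# A simple check like this catches the bug before the code ships.
def test_first_page_returns_first_items():
    assert paginate([1, 2, 3, 4], page=1, page_size=2) == [1, 2]

try:
    test_first_page_returns_first_items()
    print("test passed")
except AssertionError:
    print("test failed: page 1 skipped the first items")
```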

5. Maintain accuracy

One of the most common issues for AI-based systems in 2023 has been misinformation, whether via campaigns run in relation to the war in Ukraine or the state of global politics.

AI is undoubtedly being used to generate and propagate disinformation. Therefore, it is critical that the data coming from AI is checked for accuracy.

6. Keep it fresh

The more AI is used to generate data, the more of that data is fed back into AI.

You get diminishing returns and, ultimately, everyone starts to sound the same. This is already an issue for companies using generative AI for content creation.

7. Monitor for bias

All AI is biased by default, normally as a result of the way its developers built it. You need to be aware of this and decide how you intend to handle it, as it could negatively impact business decisions.

When teaching or building on top of core models, you are likely to encounter unconscious bias that you didn't expect, something which can already be seen with ChatGPT. One way to start monitoring for it is sketched below.
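As one illustration of what that monitoring could look like (the decision log and the threshold below are hypothetical), comparing approval rates across groups and flagging large gaps is a common starting point:

```python
from collections import defaultdict

# Hypothetical model decisions logged alongside an attribute you monitor.
decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approved = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approved[d["group"]] += d["approved"]

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Flag any group approved well below the best-treated group
# (0.8 mirrors the common "four-fifths" rule of thumb).
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"review needed: group {group} approved at {rate:.0%} vs best {best:.0%}")
```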

8. Build trust

When running any AI-based system for your business, you need to ask the following questions:

Do I trust the developer?
Do I trust the sources the AI was trained on?
Is there a potential conflict with our business?
Do we trust who else is using the tool?
How is our data going to be used?
How do we cut ties with the tool?

9. Don’t use personal data

AI can easily be used to gather enough information to spoof a person and assume their identity. People should be mindful of inputting names, contact details and other personal information into these tools. To help increase security, keep personal social media profiles private and never share payment details or identification documents in the public domain.

10. Take responsibility

The ‘AI wrote it’ excuse… Well, who published it? Who is responsible for the content? One of the biggest and currently untested issues with AI is who is liable for automated actions.

Currently there is no case law for this, but in the next 12 months, expect someone to take OpenAI to court about how AI has been used. For now, assume you will remain liable.

Want to hear about future events at Milton Park?

Sign up to the Milton Park newsletter for more information and check out the events page on our website.
