Most of us buy goods on the internet without reading the terms and conditions. We take it as a given that the clauses in these standardised agreements are non-negotiable, and hope that they are in our best interests.
Too often, however, this doesn’t seem to be the case. From laptop makers to airlines to buy-now-pay-later companies, there are endless rows about whether terms and conditions are fair.
Obtaining hard facts about the size of this problem is difficult. But certainly there are a lot of unhappy shoppers on consumer forums.
Many, for example, have experienced difficulties enforcing their contractual rights, or were unaware of what they had agreed to when they bought something. In such circumstances it can be difficult to find free or affordable legal advice, so many probably just give up.
One argument is that people should simply read the contract before clicking “I agree”, but most of us don’t have the time or expertise to do so. A solution may soon be at hand, however. Instead of having to plough through all the small print ourselves, we might be able to hand the job to artificial intelligence.
What exists already
AI tools to analyse legal documents have existed in a very basic form for a while. They can flag up potential issues, such as rights violations, that the consumer might want to investigate further. But because these tools can only handle small amounts of text at a time, you have to copy and paste the terms sentence by sentence, and they are designed as a guide to reading the small print yourself rather than removing the need altogether.
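To give a flavour of how this sentence-by-sentence approach works, here is a deliberately simple sketch in Python. It uses a handful of keyword patterns rather than a real AI model, and the patterns and example clause are purely illustrative.

```python
# A simplified illustration of how early tools work: the user pastes one
# clause at a time, and the tool flags wording that may deserve a closer look.
# The patterns and example clause below are illustrative only.
import re

FLAG_PATTERNS = {
    "waiver of rights": r"\bwaive\b|\bwaiver\b",
    "forced arbitration": r"\barbitration\b",
    "unilateral changes": r"\bwithout notice\b|\bat our sole discretion\b",
    "no refunds": r"\bnon-?refundable\b|\bno refunds?\b",
}

def flag_clause(clause: str) -> list[str]:
    """Return the names of any patterns found in a single pasted clause."""
    return [name for name, pattern in FLAG_PATTERNS.items()
            if re.search(pattern, clause, flags=re.IGNORECASE)]

clause = "All fees are non-refundable and terms may change without notice."
print(flag_clause(clause))  # ['unilateral changes', 'no refunds']
```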
There are more sophisticated AI tools that solve the related problem of reading web policy documents. Rather than pasting in text, you upload the relevant URL. The important but narrow focus here is on how web providers use your data. This makes it easier to teach an AI model everything it needs to know – particularly in such a heavily regulated area.
With terms and conditions, the challenge is their varied nature. Vendors are much freer to formulate everything in their own words, which makes such clauses much harder for an AI to detect and understand.
There are also lots of variations between jurisdictions, such as “solicitor” in the UK and “attorney” in the US. This means that an AI trained on US data may mislead consumers in the UK. Yet existing tools often don’t make clear which jurisdiction they are designed for.
You might be wondering whether the alternative is simply to copy and paste terms and conditions into one of the latest AI chatbots, such as ChatGPT, but that’s not a solution either. These general models are not specifically trained on legal texts or legal analysis. This means that any advice they give is just as likely to be accurate, inaccurate or entirely made up.
Fixing the problem
As far as we are aware, no team of developers is trying to create a dedicated terms and conditions AI for consumers using models such as OpenAI’s GPT-4, which underpins ChatGPT.
Instead, many AI developers seem to be concentrating on the more lucrative area of creating tools that will automate legal work for law firms and other companies. This could even lead to terms less favourable to consumers, since the focus will likely be on cutting costs rather than improving service quality.
To change this situation, lead author Jens Krebs and his colleague Ella Haig at the University of Portsmouth have been developing a terms and conditions app for England and Wales. When fully developed, it will enable people to copy and paste an entire document into the prompt.
It will then list any terms that might unexpectedly affect the consumer, for example by failing to meet legislative standards such as the Consumer Rights Act 2015. It will also compare all the terms to those used by comparable vendors to ensure that nothing unusual has been slipped in. When it spots something, it will advise the consumer to read that part before deciding whether to go ahead.
The project is currently at the stage of testing the app on different AI models to see which is most effective. So far Google’s BERT is coming out best, with 81% accuracy when tested against data for which the researchers know what the correct result should be.
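For readers curious what that kind of accuracy testing might look like in practice, here is a rough sketch using the Hugging Face transformers library. The model name is a hypothetical fine-tuned checkpoint and the labelled clauses are invented for illustration; this is not the Portsmouth team’s actual code.

```python
# Sketch: scoring a clause classifier against labelled examples, in the
# spirit of the accuracy testing described above.
# "my-org/tandc-clause-bert" is a hypothetical fine-tuned BERT checkpoint,
# and the labelled clauses are invented for illustration.
from transformers import pipeline

classifier = pipeline("text-classification", model="my-org/tandc-clause-bert")

labelled_clauses = [
    ("You may cancel within 14 days for a full refund.", "FAIR"),
    ("We may change these terms at any time without notice.", "UNFAIR"),
    ("Refunds are issued to the original payment method.", "FAIR"),
    ("All disputes are resolved solely at our discretion.", "UNFAIR"),
]

texts = [clause for clause, _ in labelled_clauses]
gold = [label for _, label in labelled_clauses]

# The pipeline returns a predicted label for each clause.
predictions = [result["label"] for result in classifier(texts)]

# Accuracy against the known-correct labels, as in the 81% figure above.
correct = sum(p == g for p, g in zip(predictions, gold))
print(f"Accuracy: {correct / len(gold):.0%}")
```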
Nothing will be launched until accuracy hits 90% to 95%. The hope is that the app will be made available to consumer groups like Which? in 2024 and then go on general release in 2025. The intention is that it will be free to use.
The key obstacle for such a project is the lack of examples of detrimental terms on which to train the AI – exactly the same problem consumers face if they are brave enough to try to judge terms and conditions. The long-term plan for continuing to increase accuracy in the Portsmouth app is to supplement and replace its training data with real data from consumer organisations, the government and consumers.
The hope is that the app will be at the forefront of a new generation of AI tools designed to make terms and conditions less opaque. As well as potentially reducing the number of unhappy consumers, these might also help people who are already signed up to unreasonable terms to prepare and present their case – thereby reducing the need for lawyers.
If such services take off, the hope would be that they also discourage vendors from pushing the boundaries of what is acceptable. If terms and conditions become a bit more favourable to consumers, that would be a huge win for this emerging technology.
Source: The Conversation