The UK government has published an AI white paper setting out how the use of AI will be regulated in the United Kingdom.
The white paper proposes a new approach to regulating AI to ‘build public trust in cutting-edge technologies’, as part of a plan to make innovation, growth, and job creation easier for businesses.
The aim is to spread the benefits of AI across society so that everyone can use them.
AI currently contributes nearly £4 billion to the United Kingdom’s economy.
The plan follows the announcement of a new expert taskforce that will help build the UK’s capabilities in foundation models, such as large language models like the widely used ChatGPT.
Alongside this is a £2 million sandbox trial that will help businesses test how AI rules apply to their products before they reach the market.
What are the worries?
Many believe that AI, growing at such a fast rate, could threaten jobs.
As AI becomes smarter, its potential for harm grows.
Some people already use ChatGPT maliciously, and as it becomes more capable, this problem is likely to worsen.
There have been concerns that AI could, if exposed to more content from the internet, pose a risk to people’s privacy and human rights.
Racist, sexist, and other offensive content is plastered all over the internet, and giving AI access to it may see the technology form undesirable opinions or biases against certain groups of people.
This has been seen previously when some companies used AI to review job applications.
The AI ended up showing biases, rejecting applications from women and people of colour even when they were fit for the job.
The UK government wants existing regulators to oversee AI within their own sectors, using current laws.
Setting up a new third-party regulator or passing new legislation would take extra time, whereas relying on existing laws and regulators within each sector means quicker regulation.
Even so, this approach will still require significant investment.
The white paper sets out five main principles:
safety, transparency, fairness, accountability, and contestability.
The European Commission has released proposals for regulations named the ‘Artificial Intelligence Act’.
These proposals would go further than China’s AI regulations, which have already moved past the proposal stage.
China already has rules in place for companies to notify users when AI algorithms are being used.
The European Commission’s proposal includes a risk-based grading system that ranks AI products by how harmful they could be.
For example, an AI product that diagnoses medical conditions would be regulated more heavily than an email security product that utilises AI.
Thierry Breton, the EU’s Commissioner for the Internal Market, has said that the AI Act’s goal is to “Strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”.
We stay up to date with technology and its development, which is why you can trust us with your IT.
If your business needs help with its IT, contact our team.
Itek provides IT solutions for your entire IT infrastructure. Your business can experience a cost-effective service at a predictable fixed rate, removing the burden from you and your team and freeing you to focus on your goals.