The ethics of AI are changing. Where will it lead?
There is talk now that artificial intelligence will be as monumental an invention as the wheel or fire. We’re not sure if that will be true five years from now, but today, generative AI tools are everywhere, including offices and schools. It’s become the technology version of the Wild West. And like all technological leaps, we need to figure out how best to use it ethically. A lot of experimentation will happen in the months ahead. Will there be rules of engagement written to provide an ethical roadmap? We sure hope so.
AI is now EVERYWHERE.
Artificial Intelligence (AI) is finally growing up. With the explosive influx of free and easy-to-use generative AI tools, more people are experimenting with them, building on them, and worrying about how easy it is to get a new robot friend to write, draw, or produce whatever is needed. In the past few months, there has been a lot of talk of governments developing much-needed guardrails in the form of codes of conduct, ethics, and other regulatory rules to ensure the technology isn’t used in an immoral or dishonest way. The technology has also already been banned in some institutions and jurisdictions.
We asked Stephen Cheeseman to weigh in.
“As the Head of Legal and Compliance with thinktum, I have been a voice on the ethics of the platform we are building, particularly when it comes to how we might leverage AI technology. We built our solution on an ethical framework.”
And that, to thinktum, was non-negotiable.
The US and EU are well on their way to developing these rules right now. A global, industry-wide set of rules regarding AI use would discourage nefarious actors and unethical organizations, and encourage transparency and growth. As with every new technology, we need to use it for a while before we understand how best to regulate it. Advocates are already pushing for such a framework, and we at thinktum agree that international guardrails must be developed, and fast.
The European Union put in place the General Data Protection Regulation (GDPR) on May 25, 2018. The GDPR ensures private data remains private throughout the European Economic Area. Any organization doing any kind of business on or offline in Europe must adhere to the provisions within the GDPR.
Here are the main provisions of the GDPR:
- The definition of personal data has been expanded to include birthdates, addresses, and economic, cultural, social, mental, and even genetic data. The EU defines it as “any information relating to an identifiable person who can be directly or indirectly identified in particular by reference to an identifier.”
- Anyone who owns or operates a business, or who processes personal data belonging to people in the EU, must abide by the GDPR.
- Under the law, a firm’s Controller determines how the personal data of customers and businesses is collected and processed, while the Processor processes that personal data on the Controller’s behalf. Each has a legal obligation to safeguard personal data and ensure it is processed securely.
- A Data Protection Officer must be named. In large organizations that carry out large-scale data processing, this officer supervises that processing to ensure a secure environment.
- The GDPR was signed into law in 2018, and any organization doing business with the EU must adapt its processes to be compliant. Scofflaws will be subject to fines from regulatory bodies.
- Any organization collecting personal data must obtain explicit consent before doing so. That means a clear explanation of how and why the data will be used, and the ability for users to opt out if they wish.
- Any privacy breach, regardless of severity, must be reported within 72 hours of when it was first detected; delays may result in fines. (A simple illustration of this deadline appears after this list.)
- If a privacy breach occurs, the affected individuals must be notified about it as soon as it is detected.
- Complying with the GDPR may differ from firm to firm, depending on the goods or services offered and the size of the organization.
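To make the consent and 72-hour reporting rules above a little more concrete, here is a minimal sketch of how an engineering team might model them. It is illustrative only, not legal advice; the class names, fields, and helper functions are our own assumptions rather than anything prescribed by the GDPR text.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ConsentRecord:
    """Explicit, purpose-bound consent that a user can withdraw at any time."""
    user_id: str
    purpose: str                      # clear explanation of how and why data is used
    granted_at: datetime
    withdrawn_at: datetime | None = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None


# The GDPR requires breaches to be reported within 72 hours of detection.
BREACH_REPORTING_WINDOW = timedelta(hours=72)


def breach_report_deadline(detected_at: datetime) -> datetime:
    """Latest moment a detected breach must be reported to the regulator."""
    return detected_at + BREACH_REPORTING_WINDOW


def is_report_overdue(detected_at: datetime, now: datetime | None = None) -> bool:
    """True once the 72-hour reporting window has elapsed."""
    now = now or datetime.now(timezone.utc)
    return now > breach_report_deadline(detected_at)
```

In practice most of the work is organizational, knowing where personal data lives and who detects breaches, but even a small check like this makes the 72-hour clock visible to the engineers who have to honour it.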
Countries such as Canada and the United States are encouraging organizations to adopt GDPR-style practices on their own until more stringent domestic laws can be put into place. Adhering to the most rigid regulations available makes it easier for firms to comply with whatever new legislation follows.
“As a privacy attorney practising in Canada, the US, and the UK, I have found the GDPR’s evolution since 2018 to be an extremely helpful learning process for other countries developing their own privacy ethics, standards, and laws. The EU’s recently announced AI regulation, set to come into force in 2024 for its 27 member states, will be another helpful step on the US and Canada’s AI regulatory journey.”
The Canadian government is currently developing national privacy legislation, which is slowly making its way through the parliamentary process. We expect it to become law within the next year or two.
What it means to the legal community
As our Head of Legal and Compliance, Stephen has both experience and expertise when it comes to technologies such as artificial intelligence.
Here’s more of what he had to say about it:
“The expansion of cloud-based services has lowered the cost of using AI and thus enables it to be used in many new sectors, ranging from industry to insurance. […] With the speed at which new technologies are appearing, regulations regarding privacy, data security, and AI can’t keep up. This puts a higher demand on technology companies to build in their own ethics standards appropriate to their industry. As thinktum has.”
It also seems that people are becoming savvier about their own privacy and about the data they leave behind when scrolling, shopping, or reading. New laws are imperative to ensure every organization makes a serious commitment to respect and protect user data, because users are demanding it.
More and more organizations are realizing they can use technologies such as artificial intelligence and machine learning to improve processes, increase the access and affordability of products and services, and contribute to a healthier bottom line. The key is to understand what AI is particularly good at, and to use it in exactly that way.
A new generation of tech workers is coming.
At thinktum, we consider AI to be simply a tool, much like a hammer: it needs a human hand to make it work. Many people overlook the human element of AI, which we call augmentation, defined as the partnership between humans and AI. Humans write the algorithms, curate the data sets, and refine the prompts that allow generative AI tools to work. AI isn’t capable of doing anything on its own, and that’s an important thing to remember as AI tools become more and more ubiquitous.
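As a rough illustration of that partnership, the sketch below puts a human reviewer between a generative model’s draft and anything that reaches a customer. The function names and flow are purely hypothetical; this is not thinktum’s platform or any particular vendor’s API.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    prompt: str        # written and refined by a human
    text: str          # produced by a generative model
    approved: bool = False


def generate_draft(prompt: str) -> Draft:
    """Stand-in for a call to a generative model (stubbed out here)."""
    return Draft(prompt=prompt, text=f"[model output for: {prompt}]")


def human_review(draft: Draft, reviewer_ok: bool, notes: str = "") -> Draft:
    """A person decides whether the output is accurate, ethical, and on-brand."""
    draft.approved = reviewer_ok
    if not reviewer_ok:
        print(f"Sent back for rework: {notes}")
    return draft


def publish(draft: Draft) -> None:
    """Nothing goes out the door without explicit human sign-off."""
    if not draft.approved:
        raise ValueError("Draft has not been approved by a human reviewer.")
    print(draft.text)
```

The point is not the code itself but its shape: a human writes the prompt, reviews the result, and holds the final approval.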
“The fact that the language and image recognition capabilities of AI systems are now comparable to those of humans is a wake-up call for regulators and for business ethics. While AI has been around in some form since 1950, it is the expansion of cloud-based services, lowering the cost of using AI by 330% since 2000, that truly enables it to be used in many new sectors ranging from healthcare to industry to insurance.”
As AI is built into more and more business processes, workers will see the value in pursuing augmentation-related technology careers. It’s vital that ethics be included in college and university curricula so that ethical thinking is baked into every aspect of development and deployment. That way, ethical considerations come from the top down as well as from the bottom up, which will go a long way toward assuaging fears that AI will be used as a tool for improper or even immoral purposes.
Here’s our expert again.
“Our education system may be the best wake-up call. More than ever, the ethics of technology are coming under scrutiny. Traditional ethics education was part of law and medicine, but today universities offer full undergraduate degrees in ethics, with credits recognized toward technology and engineering degrees.”
As AI tools continue to be built at a rapid-fire pace, with more apps like ChatGPT coming online, how much benefit AI delivers is largely up to individual users. thinktum feels it’s imperative that the ethical issues of any platform also be understood and managed, including regulatory mandates that go beyond ethical guidelines. Encouragingly, no serious counterargument seems to be developing, which suggests that AI users welcome safeguards that ensure a level and legal playing field.
Looking forward
Technological advances help humans focus on doing what humans do best: thinking creatively.
Tech for good is more than a catchphrase to thinktum; it’s how we think and do business.
Do you have a final thought you’d like to weigh in with, Stephen?
“We also keep in mind the importance of respecting the unmatchable value of the human mind, and thus the concept of augmentation. The input of humans, known as augmentation in the AI world, plays an important role in ensuring that ethical and lawful standards are met.”
Artificial intelligence without humans doesn’t work. That’s why we’re so bullish on augmentation: it’s the human/technology partnership that makes it so exciting. As long as humans are involved, we can make sure guardrails are in place to safeguard its use and ensure the augmentation that results is beneficial to humankind.
If you would like to learn more about augmentation and how thinktum can assist with your customer journey, reach out and say hello. You’ll be amazed at how easy it is to get started.
This article’s featured collaborator
Stephen Cheeseman, Head of Legal and Compliance, thinktum