Organizations still have much to figure out about how best to implement artificial intelligence (AI), a technology that has affected, or will affect, them all, whether they like it or not.
The advances have been staggering, with ChatGPT and generative AI leading the charge, contends national Canadian law firm Norton Rose Fulbright Canada LLP (NRF), which earlier this month held a virtual summit revolving around a litany of AI-centric issues.
“AI and machine learning,” it stated, “have the potential to outperform humans – but at what cost?
“But it’s not all bad; AI has significantly impacted the healthcare system, helping revolutionize diagnosis, treatment and disease prevention. AI algorithms can be leveraged to scan and protect against financial fraud, with the potential to beef up a company’s cybersecurity. Conversely, AI is being used by bad actors to attack companies and penetrate state-of-the-art security.
“And there is a myriad of other legal questions: Who owns the knowledge? Who is liable for a breach of privacy? How will AI impact the insurance industry? How do governments regulate this borderless system? How do companies protect their data and copyright?”
What is certain, for now at least, is that there are far more questions than answers, as evidenced by one session that delved into identifying and managing the legal risks associated with AI.
Among the speakers was Handol Kim, co-founder and chief executive officer (CEO) of Variational AI, a Vancouver firm that has developed a platform based on generative AI (GenAI) and machine learning advances that, the company says, redefines “the economics of drug development.”
Speaking specifically about the ChatGPT craze, he said there is a “tremendous amount of hype in popular culture about GenAI, and I think a lot of that stems from the fact that large language models (LLMs) operate in the realm of language. As human beings, that’s how we relate to one another and that is how we judge intelligence. If someone can write or speak well, and use language in a very advanced sense, we perceive them to be intelligent.
“The question then becomes: are these actually intelligent? No, because that would be real AI, not machine learning, and real AI is still very far away.”
An LLM, he said, understands only language; it does not understand the context to which a word or structure relates: “(It) does seem intelligent, because that’s how we humans gauge that intelligence.
“However, I would say that, given some time, they’ll continue to get better and better. And the disruption will continue to accelerate, bringing beneficial opportunities to the business landscape, but also quite a bit of disruption to the status quo and how we do things today.”
Following Kim’s presentation, moderator Jesse Beatson, an associate with NRF Canada’s Toronto office, asked Justine Gauthier, director of corporate and legal affairs with Mila – Quebec Artificial Intelligence Institute, to discuss what risks AI could bring to an organization.
One area where many questions are being asked, and where much remains uncertain, revolves around intellectual property and who owns the content used to train an AI system, she said.
“For example, can content that was generated using an AI system be protected by copyright or be patented? There is a wide array of lawsuits right now, especially in the United States, against certain companies that have trained these huge AI models using data that can be found virtually everywhere on the internet but that is protected by copyright.
“There are a lot of unanswered questions in that regard as to whether training an AI system with copyrighted data or copyrighted works is actually copyright infringement or not.”
Prior to the session, Imran Ahmad, a partner with the firm and its Canadian head of technology and co-head of information governance, privacy and cybersecurity, held a fireside chat with Marcus Brown, the president of Theia Markerless, Inc., the Kingston, Ont.-based developer of AI-driven motion capture technology.
“Measuring human motion has been a growing technology sector for the past 40 years,” the firm states on LinkedIn. “Whether the analysis is used for sports performance, clinical assessment, or animation, the technology has been intrusive, and requires markers or sensors to be placed on the participant.
“Our objective at Theia is to radically change the biomotion industry. We capture synchronized video from an array of cameras and then use deep-learning and artificial intelligence to accurately perform the same analyses that previously required cumbersome sensors.”
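Theia’s production pipeline is proprietary, but the general idea of markerless capture, inferring joint positions from plain video with a learned pose model rather than physical sensors, can be sketched with open-source stand-ins. The example below is a minimal illustration under stated assumptions, not Theia’s method: it uses OpenCV and Google’s MediaPipe pose model, plus a hypothetical input file gait_trial.mp4, to estimate a left-knee angle in each frame of a single video.

```python
# Minimal markerless-capture sketch: OpenCV + MediaPipe stand in for
# Theia's proprietary multi-camera system; gait_trial.mp4 is hypothetical.
import math

import cv2
import mediapipe as mp


def angle_deg(a, b, c):
    """Angle at landmark b (degrees) formed by segments b->a and b->c."""
    v1 = (a.x - b.x, a.y - b.y)
    v2 = (c.x - b.x, c.y - b.y)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to acos's domain to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


landmarks = mp.solutions.pose.PoseLandmark
cap = cv2.VideoCapture("gait_trial.mp4")  # hypothetical input video

with mp.solutions.pose.Pose() as pose:
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            lm = result.pose_landmarks.landmark
            knee_angle = angle_deg(
                lm[landmarks.LEFT_HIP],
                lm[landmarks.LEFT_KNEE],
                lm[landmarks.LEFT_ANKLE],
            )
            print(f"frame {frame_idx}: left knee angle ~ {knee_angle:.1f} deg")
        frame_idx += 1

cap.release()
```

A single camera yields only 2D image-plane angles; a system like Theia’s fuses an array of synchronized views to recover full 3D kinematics, which this sketch does not attempt.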
Ahmad asked Brown where an organization about to enter the AI space should invest its money.
“Even with the softening economy, and with budgets being tighter, I would strongly recommend that the legal framework be set up correctly and flexibly so that it can evolve as the task evolves,” he replied.
“Unfortunately, there isn’t a lot of value in curating data or accessing data when you don’t have the right support. And this will absolutely become an issue within the next few years, as companies have been training algorithms or doing any sort of analysis on data that they do not own.”
Even in the tightening economy, he said, “I recommend finding the money to make sure that you are compliant, and that you have received the right legal advice, especially given that there are so many different jurisdictions where there are different legal frameworks that need to be applied.”