Balancing the enormous potential of AI with a realistic view of the risks



Like the rest of the world, our team has been caught up in the explosion of AI tools and capabilities. Our clients are leveraging AI while simultaneously working out guardrails to corral its negative side effects. Within our firm, we began with internal research projects, dividing the major platforms among staff to explore their relative strengths and weaknesses. AI is already finding its way into our work in ways large and small: it extends and retouches imagery in an instant, and it's particularly good at summarizing or distilling information. We're also experimenting with its ability to perform more complex tasks, such as generating conceptual mock-ups and building project schedules.

Recently, though, we've seen several clients ban or severely curtail the use of AI for work we do on their behalf. One new client's master services agreement (MSA) includes a clause specifically forbidding the use of AI. This prohibition isn't a Luddite point of view; three major areas of concern are at the root of it: security, ownership, and accuracy.


Using AI effectively, avoiding critical pitfalls

Security

Security is probably the biggest roadblock we see companies worrying about. AI is becoming so ubiquitous that a careless employee or supplier might not think through the implications of having AI edit a confidential document or transcribe a conference call. But where does that input go? Is it stored? Is it used as data to feed the AI model? Could it resurface somewhere else? The unknowns are keeping cybersecurity teams up at night. Many companies are racing to implement their own "private AI" tools: closed environments where data is secure and access is restricted to internal audiences.

Ownership

Ownership is a significant concern, particularly in the area of generative AI imagery. In August 2023, a federal court in Washington, D.C. ruled that AI-generated work is not the work of a human creator and, on that basis, cannot be copyrighted. This is likely to be the subject of ongoing litigation for some time, but as of today, such work can't be protected or "owned." That is a huge problem for companies accustomed to very clear provenance and copyright in all their endeavors. Adobe has tried to get around this hurdle by training its AI exclusively on images from its own library, all of which have clearly defined ownership and permissions. Another major design software company just announced a research project in 3D generative imagery, noting that it was trained on "both publicly available and licensed data (no customer data was used)." The need to add that qualifier to reassure customers shows how sensitive the question of input sources has become. We imagine that AI retouching and manipulation tools used on licensed or owned images will gain acceptance long before wholly AI-generated imagery does.

Accuracy

Accuracy in AI-generated content is a well-documented challenge. AI systems can make things up, or "hallucinate." In one widely reported example, a lawyer used ChatGPT for research and based his legal arguments on cases that were later shown to be completely fictitious. When asked to generate charts or maps, the output can look convincing, but it's impossible to know what underlying data, if any, produced it. So while ChatGPT and other large language models (LLMs) can be incredibly useful, the resulting content needs to be viewed with a healthy degree of skepticism. As with imagery, LLMs have immediately useful applications in editing existing content, but they can't be trusted to generate content from scratch without significant human fact-checking and cross-referencing.

Putting what’s best for our clients first

The AI landscape is changing every day and evolving at an exponential pace. While the giddy enthusiasm is contagious, caution is warranted. One challenge we're going to face is knowing exactly when and where AI is being inserted. Within Adobe Acrobat, for example, it is currently possible to disable the AI features, but will that be true for every application we use going forward? Avoiding AI may be like a diet where you're trying to avoid salt: before long, you find it's in just about everything. We believe the key is to know as much about AI and its applications as we can. That puts us in the best position to respect and enforce our clients' policy choices, and to make our own strategic decisions as well. It requires an investment in research and open lines of communication with our partners. There will likely be creative and productivity benefits beyond what we can imagine today, but like anything, we need to understand the risks and costs.
