In today’s column, I examine the latest advancements in human-AI collaboration and explain how these amazing capabilities will materially impact your everyday use of generative AI and large language models (LLMs). To illustrate these advancements, I highlight the newly and widely released OpenAI ChatGPT specialized add-in known as Canvas, which has garnered a great deal of media attention, deservedly so.

A key takeaway is that if you haven’t heard about, seen, or used these innovative AI-based collaboration tools, you are in for quite a surprise. In some mind-bending respects, this might cause you to rethink your use of AI and gain a fresh perspective on what generative AI can achieve.

The bottom line is that it isn’t about what AI can do for you, but what you and AI can do together.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). For my coverage of the top-of-the-line ChatGPT o1 model and its advanced functionality, see the link here and the link here.

The Grand Act Of Collaboration

I’d like to start with the nature of human-to-human collaboration, after which we’ll shift to the human-AI facets of collaboration.

When you collaborate with a fellow human, the overall notion is that you intend to work together to accomplish some kind of task or endeavor. For example, suppose you’ve drafted a memo at work and a co-worker has offered to review the draft with you. They are a handy second pair of eyes. The co-worker might spot some portions of your memo that could use rewording. Maybe the co-worker will identify missing content that ought to be added. Etc.

One way to collaborate on the review of the memo would be to go back and forth via email: you email a first draft to the co-worker, they edit it and return the revised draft to you, and so on. This series of cycles via email is bound to be somewhat sluggish and not the easiest or preferred way to do things. In a sense, it is very hard to have a proverbial “meeting of the minds” when you are emailing successive drafts to each other.

You’ve undoubtedly undertaken that kind of awkward and frustrating activity and realized by the school of hard knocks that it isn’t especially streamlined.

Sad face.

There Is A Better Way To Collaborate

What would be a better form of collaboration?

Ideally, you’d like to do the review in real-time and allow each of you to immediately point out, on a shared reference draft, the precise changes or issues you have with the writing. There would be one document that both of you have direct access to. Either one of you can highlight or point out passages that might require modification. Changes can be made quickly and are visually evident to each of you. All eyes are seeing the same thing.

Voila, your collaboration becomes genuinely collaborative.

There are three main factors at play here:

  • (1) Communication. You want to make communication as frictionless as possible and as immediate as possible.
  • (2) Coordination. You want to ensure that coordination during the process is smooth and aids the effort rather than getting in the way of things.
  • (3) Task. You want to keep the task at hand front and center as much as feasible, ensuring that both of you are seeing the same things at the same time.

Happy face.

Conventional Generative AI And Collaboration

Shifting gears, let’s dive into the kind of human-AI collaboration that you experience in conventional generative AI. You will see in a moment how the above precepts come into play in an AI context. Hang in there.

Here’s a dialogue that you might typically have with any of the major generative AI apps.

  • My entered prompt: “Here is a memo that I’ve drafted, and I want you to carefully read it. I aim to get your feedback and possibly have you suggest or even make changes to the draft. We are to do this together. Don’t just summarily make changes. I want this to be collaborative.”
  • Generative AI response: “I understand. I’ve reviewed the draft. The second line in the first paragraph is rough to read and I recommend it be improved. In the third paragraph, the last several lines do not seem to fit with the topic at hand. Are you sure that those lines belong in the memo?”
  • My entered prompt: “I’m looking at the second line of the first paragraph and I think it reads really well. What about that line bothers you? In terms of my third paragraph, those several lines that you are criticizing are completely sensible to me, and if you look at the fifth paragraph of the memo, I think it will be sensible to you too. Please take that into account.”

That’s a rather common interaction when conversing with generative AI on a matter such as figuring out some possible edits for a drafted piece of content.

I have a question for you about the above dialogue.

Does the dialogue seem particularly conducive to acting collaboratively?

On the one hand, you could say that the generative AI is doing a useful job of offering insights into the draft. However, the person interacting with the AI must go back and forth about which line and which wording is at issue. Trying to proceed in this way is arduous and likely exasperating.

There must be a better way to accomplish this.

Newer Form Of Human-AI Collaboration

Suppose that we redid the interface that is involved in this human-AI collaboration.

Rather than a Q&A dialogue that is a sequence of back-and-forth iterations about something that is out of view, let’s open a second view or window that sits adjacent to the prevailing interaction. This second view will showcase the drafted memo. Thus, the person can see the draft and the AI can highlight which lines and which words are being discussed.

Furthermore, the human can highlight portions of the draft in that view, doing so to emphasize what the person wants the generative AI to focus on. Nice.

Let’s up the ante and allow the human and the AI to make changes directly on the draft memo. Either one can do so, freely, immediately, and with the attention of the other. The draft at this juncture is being displayed in the second view and persistently stays there as the dialogue between the human and the AI takes place.

Changes are immediately displayed. If the human makes a change, it is apparent what change was made because it is directly in the text, rather than cumbersomely describing the change that the user was thinking of making. Likewise, the AI can make a change in the draft, visually so, and the user sees exactly what change the AI was mentioning.

I hope that you can sufficiently envision what this new setup for human-AI collaboration looks like.

The premise is that whatever body of text you and the AI are conversing about can be displayed in a second view, where it is actively available for changes by either party. No longer do you need to waste time and effort trying to convey to the generative AI what you want to change, nor does the AI need to indirectly describe what it suggests being changed.

It is one editable view that is commonly shared by both.
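To make the underlying idea more tangible, here is a minimal illustrative sketch in Python of how a shared editable view might be represented: one document state plus a log of edits recording which party, human or AI, made each change. This is purely a conceptual sketch of the approach, not OpenAI’s actual implementation, and the class and field names are my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Edit:
    """One change to the shared draft, made by either the human or the AI."""
    author: str        # "human" or "ai"
    start: int         # character offset where the edit begins
    end: int           # character offset where the edit ends (exclusive)
    replacement: str   # new text for that span
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class SharedDraft:
    """A single document that both parties can see and modify at any time."""
    text: str
    history: list[Edit] = field(default_factory=list)

    def apply(self, edit: Edit) -> None:
        # Apply the edit in place; the result is immediately visible to both parties.
        self.text = self.text[:edit.start] + edit.replacement + self.text[edit.end:]
        self.history.append(edit)

# Example: the AI rewords a clumsy phrase directly in the draft.
draft = SharedDraft(text="The memo's second line is rough to read and hard going.")
draft.apply(Edit(author="ai", start=26, end=55, replacement="difficult to follow."))
print(draft.text)          # The memo's second line is difficult to follow.
print(len(draft.history))  # 1 edit recorded, attributed to the AI
```

In an actual product, this shared state would sit behind the second view, with the interface rendering each new edit the moment either party applies it.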

Boom, drop the mic.

When Human-AI Collaboration Improves At Scale

I realize that the idea of having a shared editable view for human-AI interaction might, at first consideration, seem like an obvious addition to generative AI. Some cynics are bound to insist that this isn’t worth much of a hullabaloo.

Well, first of all, haters are going to hate. Secondly, yes, the approach of having a shared editable view is something that has been worked on in AI labs, but it hasn’t seen much in the way of widespread commercial availability. Doing this at scale is a game changer.

When I say at scale, imagine that millions upon millions of people might end up using this type of human-AI interface.

How so?

OpenAI has now made available on a widespread basis its relatively new add-in known as Canvas, and it works seamlessly with the widely and wildly popular ChatGPT. There are reportedly over 300 million weekly active users of ChatGPT. At this juncture, those users either already have Canvas or will soon have it available to them due to this expanded release (note that Canvas had been available on a limited or beta basis for the last few months).

Canvas provides the second-view capability that I’ve been describing.

It Is Here And Now And In The Future Too

Your dialogue with ChatGPT sits to the left and the second view sits to the right.

The dialogue proceeds and, meanwhile, the second view can be jointly explored and edited. Can you visualize in your mind’s eye what this looks like? I realize this might be hard to envisage in your head. Consider visiting the official OpenAI web page that shows how Canvas works, or search on any reputable social media site for videos posted by people who have been making use of Canvas.

An interesting question is how many people will opt to use Canvas.

Some users of ChatGPT might not grasp what Canvas is or can do, and therefore they won’t invoke it. Others might know that Canvas is there but, for various reasons, don’t want to lean into it. This is one of those new pieces of functionality that will likely take time for people to get accustomed to using.

My prediction is that eventually, the use of a second-view approach will be commonplace for most generative AI apps. Users will expect it. Rather than the feature being a novelty, it will be a must-have piece of functionality. Indeed, other AI makers already have such capabilities in the works and like anything else in this highly competitive AI marketplace, every AI vendor will have to stay at the leading edge or fade into oblivion.

Expect too that variations and advancements in this type of capability are going to rapidly emerge. What kinds of variations? If two views are good, maybe allowing for three views is even better. Perhaps four views, five views, or as many as you like (some number of n-views). There will be a Darwinian process of variations proffered, some of which people will actively relish and others that they won’t, ultimately winnowing to what people, by and large, want to have available.

Intriguing Questions Of Sensibility

There are fascinating behavioral ramifications. Allow me a moment to examine some aspects of this capability that maybe don’t immediately come to mind. Get ready to think outside the box.

Who should initiate the use of a shared editable view?

Your first thought might be that of course the human decides whether to engage the capability. Humans are supposed to be in charge of AI. Period, end of story.

Hold on for a moment.

Suppose a person using generative AI doesn’t happen to realize they could benefit from using a second-view collaborative feature. Maybe the idea of doing so doesn’t pop into their head. Or perhaps they are unfamiliar with the feature and don’t realize how it can help.

We might allow the generative AI to automatically initiate the second view. It goes like this. A user is carrying on a normal dialogue with AI. At some point, the person indicates they need to write a quick message to tell someone that foul weather is expected in their area. Based on that comment, the generative AI could discern that the user intends to write a message, which is a suitable circumstance for invoking the second-view capability.

Voila, the AI does so.

In the case of OpenAI’s Canvas, the AI researchers wrestled with this kind of automatic invoking of the capability. They have set up their AI to do so but realize that users might get irked. How so? If the invoking happens too often, a user might become steamed and feel like the AI is overplaying its hand. A gentle touch is needed for the AI to discern how often to take such actions. This is one of those parameter-setting aspects.
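To give a flavor of what such parameter-setting might involve, here is a small hypothetical sketch of an auto-invoke gate. The thresholds, names, and overall logic are my own illustrative assumptions, not how OpenAI actually implemented Canvas; the point is simply that the AI weighs its confidence that the user is drafting something against how recently it already opened the second view.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tuning knobs; a real system would refine these via testing and feedback.
INTENT_THRESHOLD = 0.8            # how sure the AI must be that the user is drafting something
COOLDOWN = timedelta(minutes=10)  # don't reopen the view too soon after the last auto-open

def should_open_canvas(drafting_confidence: float,
                       last_auto_open: datetime | None,
                       user_disabled_auto_open: bool) -> bool:
    """Decide whether to automatically open the shared editable view.

    drafting_confidence: the model's estimate (0..1) that the user is composing
    a document (memo, essay, code) rather than simply chatting.
    """
    if user_disabled_auto_open:
        return False                      # the human stays in charge
    if drafting_confidence < INTENT_THRESHOLD:
        return False                      # not clearly a drafting task
    now = datetime.now(timezone.utc)
    if last_auto_open is not None and now - last_auto_open < COOLDOWN:
        return False                      # avoid irking the user by overdoing it
    return True

# Example: the user says "I need to write a quick message about the storm warning."
print(should_open_canvas(drafting_confidence=0.92,
                         last_auto_open=None,
                         user_disabled_auto_open=False))  # True
```

Dialing the threshold up or the cooldown down is exactly the kind of gentle-touch tuning being referred to here.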

Another Sensibility Puzzler

Here’s another puzzler.

Suppose the generative AI examines a draft that is jointly being worked on with a user and computationally determines that the draft needs an utter overhaul. The draft as it sits currently is a mess, containing fragmented sentences and misspellings, and is otherwise a travesty of writing.

Should the AI take the bull by the horns and summarily rewrite the entire draft?

You’ve certainly experienced this same aspect in real-life human-to-human collaborations. The person you are collaborating with announces in a loud voice that your draft is a pile of junk. They then grab it from you and proceed to rewrite the whole thing. You sit there, perhaps in mild shock, watching the other person redo your hard work.

Admittedly, sometimes you are perfectly fine with the other person taking charge. One issue is that if this is supposed to be a learning experience, such as writing something for a class at school, numerous AI ethical questions arise, see my coverage at the link here.

OpenAI researchers did some handwringing on the same issue related to Canvas. How far should the AI go in doing a rewrite? Should the AI proceed on its own, or only if the user requests it? Even if the user requests the action, is it the proper thing to do, given that the AI has now essentially written the content rather than the human?

This is a heady matter that society in general is going to need to figure out, including whether new AI laws are needed to deal with these human-AI ethical dilemmas, see my discussion at the link here.

Coding Of Software Is In This Same Realm

Changing to another angle on this, let’s brainstorm on how else a second-view capability could be utilized.

Those who write software probably already use some form of code-editing tool that assists in composing and testing code. In that sense, they are familiar with a second-view approach. Few of those second-view capabilities do much in terms of actively devising and testing the code, and they rarely do so in a highly collaborative way (notice to trolls: yes, some tools will do so; I’m not saying that this doesn’t exist).

OpenAI has set up Canvas to enable software coding, in addition to supporting collaboration on text composition such as writing memos, stories, essays, narratives, poems, and so on. The software side includes being able to run your code and having the AI examine the test results to then give suggestions on where bugs might be or otherwise make the code better.
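To picture the coding side more concretely, here is a rough sketch of that loop in Python: run the test suite, capture its output, and bundle the code plus the results into a review request for the AI. The file name, helper functions, and prompt wording are hypothetical placeholders for illustration; Canvas performs the equivalent steps inside its shared view rather than via a script like this.

```python
import subprocess

def run_tests(path: str = ".") -> str:
    """Run the project's test suite (here, pytest) and capture all of its output."""
    result = subprocess.run(
        ["pytest", path, "-q"],
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr

def build_review_request(source_code: str, test_output: str) -> str:
    """Assemble the material the AI would examine: the code and the test results."""
    return (
        "Here is my code and the latest test results. "
        "Point out likely bugs and suggest concrete fixes.\n\n"
        f"CODE:\n{source_code}\n\nTEST OUTPUT:\n{test_output}"
    )

if __name__ == "__main__":
    # "my_module.py" is just an example file name for this sketch.
    with open("my_module.py") as f:
        code = f.read()
    review_request = build_review_request(code, run_tests())
    # In a Canvas-style tool, this request goes to the AI, which then highlights
    # suspect lines and proposes edits directly in the shared code view.
    print(review_request)
```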

An allied topic I’ve covered in other postings is whether we are headed toward the demise of software engineers as a profession, whereby AI does all needed coding from A to Z. The AI comes up with the code, tests it, and rolls it out. Away go the human jobs of writing code and developing systems. Should you be worried if you are a programmer? See my analysis at the link here.

Human-AI Collaboration Is A Moving Target

We are just in the early days of human-AI collaboration as it pertains to the use of generative AI and LLMs.

Imagine that you are using generative AI and have an article that you are writing. The article is to contain text, various figures, graphics, suitable images, and maybe have audio and video attached too. The use of a second view, or shall we say an n-view, is going to accommodate all modes or mediums. The AI isn’t going to only help with the text composition. All the components will be shown in some number of allied views, and you and the AI will work hand-in-hand to compose, edit, refine, and finalize the piece. For the latest on text-to-video, see my discussion at the link here.

The entire kit and caboodle.

This seems quite exciting. And it is. Meanwhile, we need to ask hard questions about authorship, copyright and intellectual property (IP) rights, plagiarism, and other bleaker sides of these AI advances.

We must keep from getting out over our skis, as they say these days.

A final remark for now.

An assertion often attributed to Charles Darwin goes like this: “In the long history of humankind (and animal-kind, too) those who learned to collaborate and improvise most effectively have prevailed.” This suggests that we are smart to pursue human-AI collaboration. That is going to be our future, and there’s no turning back the clock.

Hopefully, we will be smart enough to keep human-AI collaboration in proper check and avert the dreaded existential risks that lurk within that weighty proposition. Should we collaboratively discuss this with AI, or might that be a bridge too far?

Time will tell.
