AI Goes to College No. 5 - Why you need a human in the loop, some solid advice, and a useful tool for preparing slide decks
Helping higher education professionals navigate generative AI
Welcome to Issue #5 of AI Goes to College
If you find this newsletter useful, please share it with your colleagues by sending them to
https://www.aigoestocollege.com. They may even thank you (I know I will!).
This week's issue covers:
Why you need a human in the loop
New functions for ChatGPT Teams
My alma mater (USF) is starting a new college
Solid advice from OpenAI's COO
Using AI to prepare a panel report
Speech in ChatGPT
SlideSpeak - a quick way to prepare slides from a document
Rant of the week
Why you need a human in the loop
One of the key pieces of advice I give people about generative AI (GAI) is that for anything important, you need a human in the loop. In other words, don't just trust the AI's output; make sure an actual person reviews it before you rely on it.
About a year ago, Vanderbilt University justifiably caught some flack about using ChatGPT to develop a message about a mass shooting at Michigan State University. You can read more about the incident here: https://www.theguardian.com/us-news/2023/feb/22/vanderbilt-chatgpt-ai-michigan-shooting-email
Generative AI will sometimes produce some crazy results, so someone needs to be responsible for checking its output before it's used.
The other day, I experienced a much funnier and more trivial example of how GAI can go wrong. Some doctoral students and I were meeting for our weekly writing sprint sessions. As often happens, I got the group distracted by talking about AI. I've been using an interesting tool, Beautiful.ai, which uses AI to create presentations. You can give Beautiful.ai a prompt and it will create a presentation based on the topic. I talked about how this approach might be fine for some things, but I wanted a tool that would work a little differently. We decided to do a little test and I asked Beautiful.ai to create a presentation about a technical topic called database normalization. It did a surprisingly good job. (I'll do a full review of Beautiful.ai for a future edition.) In one of our iterations of the presentation, we got a very interesting result. Beautiful.ai created this for the title slide:
I don't know about you, but that sure looks like a guy lighting a blunt (marijuana cigar). There are a number of troubling aspects to this image. I'll get into some of the other problems later, but for now I'm going to focus on the fact that the image has absolutely nothing to do with database normalization. We laughed about this, and then it hit me. At the same moment, I saw one of the students' eyes light up as he made the same connection, and we both started chuckling.
NORML (that's not misspelled) is an organization that works to reform marijuana laws in the United States. "NORML" and "normal" (as in normalization) are pretty close, so we could see how the AI engine decided to use that image.
What if I blindly used those slides in class? Many students might not have noticed, but some would think it odd (or worse). I would have been embarrassed, to say the least. But the reactions could have been much worse. Take a closer look at the image. It seems to be based on some very bad stereotypes -- the finger tattoos, the gold Rolex-looking watch, the apparent ethnicity of the person in the image -- this could be seen as implying that certain types of people are just weed-smoking drug dealers. I'm sure that's not the intent, but it's a reasonable interpretation.
This mostly funny incident really drove home the need for a human review of anything that will be used publicly or that will be relied on for decision making. Here's the bottom line:
If it's important, have someone review it carefully.
It seems like that shouldn't have to be said, but as we'll see over and over, the allure of efficiency is great, so great that it sometimes overrides common sense.
GAI news
New functions for ChatGPT Teams
Last week OpenAI announced some updates to ChatGPT Teams. ChatGPT Teams is the collaborative version of ChatGPT Plus. It lets you share GPTs with members of your team, and it excludes your inputs from model training by default. ChatGPT Teams is worth checking out if you do collaborative work with ChatGPT. There's more info here: https://openai.com/chatgpt/pricing
The text of the original email is below. I'm not sure I see anything that justifies switching to Teams if you don't already have a ChatGPT Teams subscription. But these are some nice additions. I really like the ability to see ratings in the GPT Store. The GPT Store is a bit of a mess at the moment, and being able to see ratings and starter prompts is a step in the right direction.
"Ideate faster when creating images" sounds really good, but in my limited testing it doesn't do much if you're already good with DALL-E. Basically, it just adds some instructions to the end of your prompt when you click on one of the style buttons (see below). Clicking on the button with the arrows changes the style selections. I'm more excited by the ability to quickly select aspect ratios using the button on the right. There are three choices: square, widescreen, and vertical.
Just for fun, I put in this prompt. ChatGPT added the part in bold when I selected folk art and widescreen.
A loud goat disturbing customers in a coffee shop with its loud bleating, folk art, widescreen aspect ratio.
It's kind of a fun image I suppose. My favorite is the grumpy looking old guy in the lower right corner. Everyone else looks upset, he just looks grumpy. There's also a disturbing number of random, disconnected hands in the image.
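Based on my limited testing, the style and aspect-ratio buttons seem to do nothing more than tack a suffix onto whatever you typed. Here's a minimal sketch of that behavior (this is my reconstruction of what the buttons appear to do, not OpenAI's actual code):

```python
# Hypothetical reconstruction of the DALL-E style/aspect buttons: they appear
# to append style instructions to the end of the user's prompt.
STYLES = {"folk art": "folk art", "hand-drawn": "hand-drawn", "35mm film": "35mm film"}
ASPECTS = {
    "square": "square aspect ratio",
    "widescreen": "widescreen aspect ratio",
    "vertical": "vertical aspect ratio",
}

def decorate_prompt(prompt, style=None, aspect=None):
    """Append the selected style and aspect-ratio phrases to the prompt."""
    parts = [prompt.rstrip(".")]
    if style:
        parts.append(STYLES[style])
    if aspect:
        parts.append(ASPECTS[aspect])
    return ", ".join(parts) + "."

decorate_prompt(
    "A loud goat disturbing customers in a coffee shop with its loud bleating",
    style="folk art",
    aspect="widescreen",
)
# → "A loud goat disturbing customers in a coffee shop with its loud bleating, folk art, widescreen aspect ratio."
```

In other words, there's no hidden magic here; if you're comfortable writing your own style instructions, the buttons save you a few keystrokes at most.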
To me, the best new feature is the ability to enable multi-factor authentication. MFA is a good idea generally, but it's especially important if you're dealing with anything sensitive.
Even though I'm not all that excited by these new features, I do appreciate OpenAI's efforts to keep evolving the product.
-- Original email --
Hello,
We're excited to share a series of updates that improve collaboration, security, and your overall ChatGPT Team experience. Here's the latest:
Amplify best practices with internal GPT sharing GPT builders can now share GPTs with specific individuals within a workspace, ensuring the right set of people have the right access. GPT builders can also give certain workspace members permission to view their GPT’s configuration or duplicate it, making it easier for teammates to build on or adapt GPTs to better suit their needs.
Easily manage GPT changes with version history To improve the GPT builder experience further, we’ve added the ability to view the version history of your GPT, copy over text and configurations from past versions, or restore a previous version. (Note that if you have Custom Actions on the GPT, you’ll need to re-authenticate upon restoring a previous version.)
View ratings and starter prompts in the GPT Store We’ve added a detailed view for third-party GPTs with ratings, builder verification, starter prompts, and more, making it easier to discover high-quality and relevant GPTs in the GPT Store.
Ideate faster when creating images in ChatGPT In our DALL·E GPT, you can now get inspiration on styles (e.g., hand-drawn, felt, 35mm film), and choose between different constraints (e.g., widescreen aspect ratio).
Add extra security with multi-factor authentication Team workspace members can now add an extra layer of security to their sign-in process with multi-factor authentication (MFA). Once enabled, MFA will apply upon log in across a member’s OpenAI accounts, including ChatGPT and the API Platform.
We hope you enjoy these new features. Thank you for choosing ChatGPT Team!
— OpenAI
-- END --
USF is starting a new college
The University of South Florida, where I earned my Ph.D., recently announced the formation of a College of Artificial Intelligence, Cybersecurity, and Computing. Here's the vision for the college, taken verbatim from USF's announcement (https://www.usf.edu/provost/initiatives-special-projects/caicc.aspx).
The vision for this new college is to offer undergraduate and graduate degree programs that prepare USF students for high-demand careers, empower faculty to conduct scholarly research that leads to groundbreaking discoveries and technological advancements, grow industry partnerships, and promote ethical considerations and trust throughout the digital transformation underway in our world.
USF plans to launch the college in the fall of 2025, but that seems a bit aggressive to me. At this point, they haven't even settled on the governing structure of the new college, so they still have quite a bit of work to do. The task force guiding the effort has some really smart people on it, including Alan Hevner, who was one of my faculty members, and Kaushik Dutta, who is a professor in my old department. So, the prospects seem solid to me. I'm going to reserve judgment until there are more details, but I certainly wish my alma mater the best of luck.
Tips of the week
Solid advice from OpenAI's COO
Recently, I was reading one of my favorite AI newsletters, The Neuron (https://www.theneurondaily.com/), and came across a short mention of some remarks from Brad Lightcap, Chief Operating Officer at OpenAI (the makers of ChatGPT). Mr. Lightcap gave some very solid advice for business leaders who are confused about making AI useful. The article quoted him as saying "90% of the value comes from giving people access to the tool and not thinking too much about it."
The Neuron writers went on to give some additional tips. (These are direct quotes; my additions are in brackets.)
Pay for the good stuff. Use ChatGPT Plus or Claude Pro ($20/user/month). The upgrade is like going from black & white to life in full color. [I'd add Gemini Advanced to the list.]
Give permission and time, not just access. Encourage people to take the space to experiment, learn and share findings.
Start small and iterate. Just try one little task at a time.
This is solid advice for higher ed professionals who want to leverage generative AI. However, I'd add a few things:
Develop tolerance for failed smart experiments: Experiments fail; in fact, MOST experiments fail. The important thing is to make sure that the experiments are well thought out and will cause minimal harm if they fail. That brings me to my second point.
Include human review to mitigate risks: AI will do dumb stuff. That's just the nature of the underlying models. So, it's important to make sure that an actual live person reviews output before it goes into use. AI can help cut the workload of some tasks dramatically, but pushing the workload to zero may be a false savings.
Find champions and leverage grassroots diffusion: If you're a leader who wants to take advantage, one of the best things you can do is find and support enthusiastic individuals who can help champion AI use. These champions can help spread AI use through grassroots diffusion (informal spread of a technology organically).
Sometimes we can overthink things, especially when it comes to technology. Just let people play with generative AI in a safe environment and I think you'll like the results.
Using AI to prepare a panel report
Recently, I was a panelist at the Southern Association for Information Systems conference (https://communities.aisnet.org/southernusa/home), in beautiful (and I do mean beautiful) Gulf Shores, Alabama. SAIS is a great conference and has been for over 25 years. The attendees are highly engaged, and it's a friendly, supportive group.
Despite being the very last session of the conference, we had a nice crowd and they had a lot of insights to share. I'm on another panel at the Association of Business Information Systems conference in April. After that conference, I'll give you my impressions of the main insights from both panels.
My SAIS panelist colleagues and I are preparing a journal article based on the panel. We're organizing the article around the themes that emerged during the panel discussion. Generative AI tools are proving very helpful in creating this paper. Here's a quick rundown of how I've used AI (so far).
Creating a transcript: We recorded the panel using a standard digital recorder, and I used Otter.ai to create a transcript of the discussion. Producing an accurate transcript was critical to the rest of the process; I used Otter.ai because, in my experience, its transcriptions are usually highly accurate. Otter.ai also produces an outline of the transcript, which is sometimes quite handy. It includes chat functionality as well, so I asked Otter to identify 5 - 10 themes from the discussion. The results were very accurate, in my opinion. By the way, Otter.ai automatically produces a summary of the transcript, which is awesome for meeting notes.
Identifying themes: Since I wanted to compare chatbots, I put the transcript into ChatGPT 4, Claude 3 Opus, and Gemini Advanced, then asked each one to identify six to eight themes. They identified similar themes, but there were some differences.
Identifying open issues: The next step was to ask the chatbots to identify some open questions/issues related to each theme. Again, the results were similar, but not identical.
Developing an agenda for the future: Next, I asked the chatbots to develop an agenda for the information systems academic community. I instructed the chatbots to structure the agenda around the themes and open issues. In other words, I wanted the AI tools to make suggestions about what we should do collectively. The results were more diverse here. ChatGPT and Claude came up with reasonable, but different agendas. Gemini kind of went off the rails and gave me an outline for a workshop. I reran everything through Gemini using a different approach and it gave me an agenda the second time. (See Integrating Gemini with Google Docs in this issue for details on my adventure with Gemini.)
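Since I wanted an apples-to-apples comparison across the three chatbots, I gave each one the same request. Here's a sketch of that idea as a reusable prompt builder (the wording is my reconstruction of the kind of prompt I used, not the exact text):

```python
# Keeping the prompt in one function makes it easy to paste the identical
# request into each chatbot (ChatGPT 4, Claude 3 Opus, Gemini Advanced)
# and compare the answers. The wording is illustrative, not my exact prompt.
def build_theme_prompt(transcript: str, low: int = 6, high: int = 8) -> str:
    """Build a theme-identification prompt around a panel transcript."""
    return (
        f"Identify {low} to {high} major themes in the panel transcript below. "
        "For each theme, give a short name and a one-sentence description.\n\n"
        f"Transcript:\n{transcript}"
    )
```

The follow-up steps (open issues for each theme, then a community agenda organized around those themes and issues) can reuse the same pattern: one fixed prompt, sent to each chatbot, so any differences in the answers come from the models rather than from the wording.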
Overall, I was mostly pleased with this experiment. Both ChatGPT and Claude handled the task well with minimal effort. I was especially impressed with Otter.ai. If I need to do something similar in the future, I might just use Otter's chatbot feature since the results were fine for my purposes.
Here's an important caveat, especially for more serious uses, such as preparing an article. All four of the panelists are also going through the transcript to identify themes. It's OUR work, so we need to put in the intellectual effort and exercise our collective judgment. AI is a useful tool for seeing whether there are other perspectives we might have missed. It's like having an extra colleague to provide a different point of view. There is a danger of the AI tools having undue influence on the intellectual process, however. I'm purposely not doing a deep analysis of the AI outputs until AFTER I go through the transcript myself. That way, the AI perspective won't taint my own.
But, for something like meeting notes or an overview of some talk or YouTube video, I'd be fine with letting generative AI do the heavy lifting. If I'm going to put my name on something, though, you can bet I won't surrender the reins to a chatbot.
Speech in ChatGPT
Maybe you haven't noticed, but ChatGPT will now talk to you through the browser interface. This feature has been around for some time on the mobile app, but I only recently noticed it on the browser version. You can have ChatGPT read its response aloud by clicking on the speaker icon near the message box. (See below.)
It's a cool feature, but I haven't personally found a good use for it yet. It could be great for visually impaired individuals and for language learning, though. I'm writing this at about 4:30 AM, so maybe there are some good uses I'm not thinking of. If so, feel free to let me know and I might give you a shout-out in a future issue. Just email me at craig@AIGoesToCollege.com.
Resources and Tools
SlideSpeak - so close
Creating slides for presentations is one of the banes of my existence (which should tell you how great my life is). Ever since generative AI tools have allowed uploading files, I've been searching for a tool that would let me upload a paper or other document and then create PowerPoint slides based on that document. I've tried a number of tools, such as SlidesGPT and Beautiful.ai, but none did what I wanted. There are some systems that will generate slides based on a prompt. Beautiful.ai does a decent job with this, but I don't want generative AI to come up with the content. I want to provide the content, and I want AI to do the work of creating slides based on MY content.
So, I was excited when I learned about SlideSpeak (https://slidespeak.co/). SlideSpeak promises to generate a presentation based on the content you upload. It mostly lives up to this promise. To quote the SlideSpeak FAQ page, "SlideSpeak can generate presentations with AI. Simply upload a Word or PDF document and our [AI] can generate a presentation based on the content of your documents."
My initial tests were reasonably satisfying. SlideSpeak comes pretty close to reducing the effort needed to turn a document into a slide deck. After my initial test, I jumped on their offer of $229 for lifetime access to their Premium Plus plan, which is normally $24 per month. They have a very limited free plan, which should be enough to test it out. See https://slidespeak.co/pricing/ for their range of plans.
To do a real-world test, I uploaded an extended abstract that a colleague and I will be presenting at a conference in April. Once the document is uploaded, you see the document and a chat window. In the example below, I used the chat window to ask for a bullet-point summary, just to try the chat function. It's based on GPT-4, I think. (I blurred the text since we haven't presented the paper yet.)
When you click on Preview Presentation, you get the following screen, which allows you to pick the length of the presentation, the language, and the "tone" (unspecified, casual, professional, funny, educational, or sales pitch). You can also tell SlideSpeak whether to add images and use your branding.
When you click on Next, you're presented with a Table of Contents for the presentation, which is essentially the outline that SlideSpeak will follow when creating the presentation. You can edit the TOC, which is nice. After you edit the TOC to your liking, you get this screen. You can choose between two stock templates, neither of which is very exciting. You do have the ability to upload custom templates, but I haven't tried this yet.
It only took SlideSpeak about 30 seconds to create a slide deck from my document, but the document was only a page long. Interestingly, I created a presentation from an old 10-page, double-column journal article, and SlideSpeak only took a few seconds longer to create the presentation. Not bad.
The slides are serviceable. SlideSpeak chooses some odd images and wastes one or two slides on a Table of Contents, but these are easy to delete. It also takes some liberties with content, although not in a problematic way. For example, I added "Questions?" to the TOC it suggested. SlideSpeak came up with this slide, which includes instructions for me rather than something useful for a slide. This is another easy fix. The small text size is another issue. For some reason SlideSpeak loves to use 10-point text for details. This is MUCH too small for a presentation (or for anything as I age!). Again, this is easy to fix, but the easy fixes start to pile up after a while, which is irritating since the whole point is to gain efficiency. Once you've previewed the slide deck, you can download it as a PowerPoint file and/or regenerate it. The PowerPoint file seems perfectly normal, so it's relatively easy to make the necessary edits.
My bottom line is that SlideSpeak is OK, but not great. In a pinch, you can use SlideSpeak to create a presentation, make a few edits and have a decent slide deck in 10 minutes or so. It won't be your best work, but it might be acceptable for many purposes. I can see SlideSpeak being very useful for some higher ed professionals who have to quickly create small sets of slides for recruiting and alumni events and the like. The ability to upload templates would be a big plus for these folks.
I'm not quite ready to fully recommend SlideSpeak, although it's probably worth $19 to play around with the Premium plan for a month. You can upload 50 files per month under that plan. The Premium Plus plan is $24 per month and allows unlimited file uploads. For most users, the Premium plan is more than enough, but the Premium Plus plan allows branding, which might be important for some users. Overall, I'm not sorry that I spent the money for the lifetime plan, even if SlideSpeak only saves me a few minutes per presentation. Of course, your mileage may vary.
What I really want is a blend of SlideSpeak and Beautiful.ai, which really does create beautiful presentations that have a distinct look and feel. We'll probably get there in time. Right now, I'm using Beautiful.ai when I want a presentation to really look great. SlideSpeak will be reserved for the quick and dirty presentations when I don't have the time to invest in making the slides look amazing.
If you check out SlideSpeak, I'd love to hear about it. Just email me at craig@AIGoesToCollege.com.
Where's Craig?
On March 27, I'll be a panelist at a generative AI seminar sponsored by the Society for Information Management (https://www.simnet.org/home). The panel is called SIM ReThink Everything! Generative AI in the University Classroom. There's more information here: https://members.simnet.org/events/EventDetails.aspx?id=1843894&group=
In April, I'll be attending the Association for Business Information Systems conference where I'll be on another panel on generative AI in information systems education.