LIB 4900: Sociotechnical Analysis of Artificial Intelligence - Prof. Francoeur

March 1 (Friday)

Due Today

  • Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 2021, pp. 610–23. ACM Digital Library, https://doi.org/10.1145/3442188.3445922.

Announcements

  • Revised syllabus with new policy on AI use
  • Please submit in-class activities #5 and #6 using the safer sharing method (demonstrated in class via a video shared on Blackboard)
  • Glossary assignment due on Friday, March 8

Intro to New Unit: Information Quality

Review of topics we'll cover over the next month:

  • training data
  • algorithmic bias
  • low-quality information: spam, misinformation, hallucination
  • harmful information: disinformation, propaganda, hate speech, deepfakes

Reading for Monday (Mar. 4)

Pairs of students will each select one article from the list below to summarize in class on Monday. The summary need not run longer than two minutes.

Discussion of "On the Dangers of Stochastic Parrots"

March 4 (Monday)

Algorithmic Bias

  • Team presentations
    • Each team will meet for five minutes to discuss their two-minute summary of the article they read for today
    • Two-minute presentations
  • Slides on algorithmic bias

March 8 (Friday)

Due Today

  • Glossary assignment

"How to Make Computers Less Biased"

 

Activity

  • Pretend that a company is interested in designing an AI system, to be sold to Baruch College, that can be used to grade student writing (assignments, exams, etc.). The class will be divided into three groups, each of which needs to understand what the other groups are being asked to do.
    • The Design Team will sketch out a plan for building an AI to grade student writing at Baruch by filling out a worksheet that asks the team to answer questions such as:
      • Where will you get your training data from?
      • What kinds of tests can you run as you build the model to ensure that it will do what you want it to do?
      • What exactly will be the output from the system? A letter grade? A detailed set of grades from a standardized rubric? Will the output include a written evaluation of the student work?
      • What might algorithmic bias look like in this system? How might it grade unfairly?
      • What will be the name of your AI system that you’ll be selling to Baruch?
      • Worksheet for the Design Team to complete
    • The Students have heard this project is being seriously considered for Baruch and are now thinking through its implications. On a worksheet for their team, they will compile a list of the pros and cons of such a system. In those pros and cons, they will consider not just their own values as students but will also imagine how the values of the designers might come into play as they build the system. What values held by administrators and faculty might lead them to accept or reject such a system?
    • The Faculty and Administrators. The faculty have heard that the administrators are seriously considering such a system. Both faculty and administrators form a task force to weigh the pros and cons, much as the students do. They too will consider the values of the designers as well as their own values and those of the students as they compile a list of pros and cons for the deployment of such a system at Baruch.

For Monday (Mar. 11)

Read this article:

Vincent, James. “AI Is Killing the Old Web, and the New Web Struggles to Be Born.” The Verge, 26 June 2023, https://www.theverge.com/2023/6/26/23773914/ai-large-language-models-data-scraping-generation-remaking-web.

March 11 (Monday)

Due Today

  • Vincent, “AI Is Killing the Old Web, and the New Web Struggles to Be Born” (assigned Friday, March 8)

AI and the Info Ecosystem

Discussion of “AI Is Killing the Old Web, and the New Web Struggles to Be Born.”

  • The class is divided into two-person teams that complete this worksheet about the article.
  • Class debate: the class is divided into two groups that will argue over whether the web will be seriously damaged by the flood of AI-generated content.

For Friday (March 15)

Review the study guide for the midterm exam.

 

March 15 (Friday)

Midterm Exam

March 18 (Monday)

Misinformation, Disinformation, and Malinformation

Misinformation: “information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.”

Disinformation: “false information that is deliberately created or disseminated with the express purpose to cause harm. Producers of disinformation typically have political, financial, psychological or social motivations.”

Malinformation: “accurate information shared publicly to cause harm”

All quoted definitions come from this source: Ordway, Denise-Marie, et al. “Information Disorder: The Essential Glossary.” The Journalist’s Resource, 23 July 2018, https://journalistsresource.org/politics-and-government/information-disorder-glossary-fake-news/.

  • Using this online form, each student will categorize examples or kinds of information that might be considered misinformation, disinformation, or malinformation.
  • Discussion of how we categorized the examples or kinds.

In-class activity #7

  • Will be due as homework on Friday, March 22
  • Read "Factsheet 4: Types of Misinformation and Disinformation" (PDF), originally published by the United Nations High Commissioner for Refugees in 2021 as part of a larger publication titled Using Social Media in Community-Based Protection
  • Read Ordway, Denise-Marie, et al. “Information Disorder: The Essential Glossary.” The Journalist’s Resource, 23 July 2018, https://journalistsresource.org/politics-and-government/information-disorder-glossary-fake-news/
  • For each of the three categories we discussed today, find a news article or a journal article that gives a specific example of a situation or case involving that category.
  • In a Word document that you'll turn in, provide an MLA-style citation for each article. Below each citation, write 3-4 sentences that identify which category the article exemplifies and explain the basics of the story and how it represents an example of that category.

 

March 22 (Friday)

Due today

  • In-class activity #7 (assigned Monday, March 18)

Announcements

  • Overview of glossary assignment
  • Consultations with Prof. Francoeur beginning March 25
  • No in-person class on April 8; class will be held asynchronously instead, with homework assigned.

Hallucinations in AI

Activity

  • In this shared Word document, each student will search for articles and web pages that help answer these questions:
    • What does hallucination by an LLM look like (what are examples or categories of it)?
    • Why do LLMs hallucinate?
    • How can we minimize LLM hallucination now or in the future?
  • Each person should be prepared to discuss the articles or web pages they found.

Discussion

  • Are hallucinations in LLMs more commonly cases of misinformation, disinformation, or malinformation?

For Monday (March 25)

Read

March 25 (Monday)

Due Today

Read

Announcements

  • No class on Friday, March 29
  • Begin scheduling your consultation appointment with Prof. Francoeur any time between today and April 26
  • Class on April 8 will not meet in person in our usual classroom and instead will be an asynchronous online day

Transparency

 

For Monday (April 1)

  • In-class activity #8: In no more than a page, describe an AI system that you don't fully trust yet, explain why you don't trust it, and say what it would take for you to trust it more. Share your response as a Word document with Prof. Francoeur.