AI

Ivan Illich, a former Roman Catholic priest and philosopher, questioned technology's benefits. His book, Tools for Conviviality, published in 1973, discussed the appropriate use of technology. One of his concerns was that technology might replace our ability to exercise free will. Fifty years later, in 2023, his characterization of "convivial tools" rings true in discussions of artificial intelligence. He said, "Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user."

John McCarthy of Stanford University answered the question in his 2007 paper, "What is Artificial Intelligence?" He said, "It (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable." In plain terms, I would suggest artificial intelligence creates machines that emulate human thinking.

Many aspects of artificial intelligence don't fit easily into contemporary conceptions of it. Most of us have a limited understanding of the extent to which artificial intelligence has already penetrated our day-to-day lives: digital assistants, social media, transportation, food ordering, vehicle recognition, robot vacuums, email systems, job-seeking apps, chatbots, predictive searches, entertainment recommendations, online banking and airline bookings are a few examples from a list seemingly without end. And don't forget Google, Siri and Alexa. Artificial intelligence is not the distant future of The Jetsons that some of us remember from the early sixties. It's here, and it surrounds us.

The history of artificial intelligence has been a topic of interest at IBM for almost 75 years. Alan Turing published his paper "Computing Machinery and Intelligence" in 1950, following his work deciphering the ENIGMA code during World War II. He developed a test to answer the question, "Can machines think?" Its results have been debated ever since. In 1956, John McCarthy coined the term "artificial intelligence" at a conference at Dartmouth. Since then, IBM's Deep Blue beat world chess champion Garry Kasparov, IBM's Watson beat Ken Jennings and Brad Rutter at Jeopardy! and in several other machine-versus-human encounters, Big Blue came out on top. These are benchmarks in the field.

MIT Technology Review has identified Lerrel Pinto of New York University as one of its 2023 innovators under 35. His work pushes beyond conventional vacuuming robots toward machines that are a "… more integral part of our lives, doing chores, doing elder care or rehabilitation—you know, just being there when we need them?" Most important in his work is creating robots that can learn. One of the challenges in such exercises is the amount of data required to make a learning robot. The same challenge surfaces in self-driving cars now being tested in several places nationwide. Based on what I see on I-27 between my house and Amarillo, almost anything would be an improvement.

One of the most remarkable aspects is that these robots can learn from their failures. As Amy Edmondson suggested in a Harvard Business Review article on organizational culture, this is not a new concept: "The wisdom of learning from failure is incontrovertible." The latest twist is that we are not talking about people but machines. I have challenged the West Texas A&M University campus to reduce the cost of education to students by eliminating textbooks. A Washington, D.C. reporter asked me, "What happens if you fail?" I asked him what he meant. He asked what happens if textbooks are not free at West Texas A&M University in the fall of 2024. I responded, "What happens if we reduce the cost by 90%? Is that a failure or the beginning of a learning curve?"

The art of science will be transformed by artificial intelligence, according to a post this month in The Economist. In my investigations of a few different AI platforms, I have found that each is like having a team of new graduate students conducting research. The work produced can be invaluable, but only when judged so by professional, experienced opinion. Until then, it is data without value. No wisdom. No knowledge. No insight, from my perspective.

Reports are numerous regarding the use of AI by students to write term papers, essays and other educational works. Some students argue that the skills required to tap into AI platforms and utilize them to write, or to assist in writing, are invaluable for their futures. The students may or may not be correct. Today's faculty, especially those who require writing in coursework, which in my mind should be every course, must be conversant and knowledgeable about the impact of AI on students' thinking, learning and writing skills.

At WT, our faculty, staff and students will become more familiar with various forms of artificial intelligence to stay abreast of the changing world. That familiarity is essential for a regional research university that aspires to currency, efficiency and value for the people we serve.

Walter V. Wendler is President of West Texas A&M University. His weekly columns, with hyperlinks, are available at https://walterwendler.com/.

2 thoughts on "AI"

  1. A timely warning, but Harlan Ellison said it all in 1967 with "I Have No Mouth, and I Must Scream." https://www.youtube.com/watch?v=dgo-As552hY

    The sad thing is that most universities embrace it with cheerleader sessions on AI, while faculty now face the problem of wondering whether a student assignment is original or manufactured on ChatGPT or whatever they call it. If the government expresses hesitation, why don't universities do likewise?

    This is a very worrying time for higher education, but most administrators always fix on technology as the latest flavor of the month. This time it could have dire consequences.

  2. What may or may not be obvious in a discussion of AI and its possibilities is that considering how to use AI makes us think more about people. I have recently enjoyed thinking about Wendell Berry's essays, "What Are People For?" I think we must always be asking this. Since AI is proposed to do things that people can also do, when is it wise to replace any kind of human activity or decision-making with a machine? I do not know that it will be a meaningful exercise to answer that question unless we know the value of people, the source of the value, and how we want to protect and honor that value.

    Can we conceive of a world where machines not only do what people do but also mimic the value of people? Is such a thing possible? If we do not know how to define why people are valuable (presuming that most would agree that they are valuable), how can we hope to balance the advancement of AI in our lives while at the same time honoring the uniqueness of human persons?

    Another issue along these lines is the conviviality of technology as advocated by Illich. I am intrigued by this idea of using technology "as seldom as desired." Even if AI does not replace human thought in any important way, if we become dependent on it for much of everyday life, will we lose the freedom to use it seldom? What is a good test to know when we depend "too much" on a technology like AI? How would we know if this happened?
