Giving AI Some Common Sense: A Q&A with Ron Brachman (Pt. 2)

May 9, 2024

Does AI have common sense? Most experts say no. But can it be made to have common sense? For Ron Brachman, answering that question means going back to the drawing board, and perhaps getting a little philosophical about what common sense even means.

Brachman is the director of the Jacobs Technion-Cornell Institute at Cornell Tech, and as part of a Distinguished Lecturer Seminar hosted by the Institute for Experiential AI, he made the case that common sense is all of the following: broadly known, obvious, simple, based on experience, practical, and generally about mundane things.

Following the talk, he was kind enough to answer questions from audience members about relativism in common sense reasoning, responsibility in autonomous systems, and the limits of computer science. You can find the first part of the Q&A here.


1. Humans commonly disagree about what common sense is in a lot of situations. How can we say that humans or AI have or do not have common sense if it is not well defined?

Even if we don’t have a perfect, circumscribed definition of common sense, that doesn’t mean we can’t improve AI systems’ abilities to use it. As mentioned in the talk, current systems don’t generally use simple analogical reasoning to quickly match past experiences to the current situation (thereby guiding them in how to interpret what they see and what to look for); they don’t do quick forward “mental” projections to determine what might happen if they take certain actions; and they don’t do quick, shallow estimates of the costs and benefits of competing possible decisions. These are all manifestations of what we think of as common sense, and their absence causes AI systems to make silly mistakes and sometimes disastrous blunders.

Another failure caused by a lack of common sense is the inability to put factual pieces together with what would, to humans, appear to be obvious inferential “glue” — this is the kind of gap the Cyc system was designed to overcome, and Cyc has shown that, in practice, an AI system can display a certain amount of common sense, at least in answering questions. Note that while we don’t measure common sense quantitatively in humans, or look for proofs as to whether someone has it or doesn’t (or how much of it they have), we have no problem in everyday life saying that someone has a “lot” of common sense or that they clearly use it (and we certainly have no problem pointing out situations in which others have failed to use it). We’d love someday to get to a more refined definition of what is and is not common sense, but the current lack of a precise definition shouldn’t stop us from trying to implement more practical, mundane forms of intelligence in machines and create systems that behave more sensibly.

2. Can common sense be captured in training data?

Humans learn common sense from interacting with the world (including each other). A lot of what we think of as common sense we acquire as children, so I imagine you can argue that common sense is developed through extensive “training” with data. We learn what’s dangerous and what to avoid (sometimes the hard way), what happens consistently as a result of a given type of action, how long things take, what people will typically do in various situations, etc., by observing “data” and doing informal experiments in the world. But we infer things through these experiences — the common sense is not laid out there explicitly in maxims and rules that we simply learn by absorption or rote memorization. So while it’s fair to say that common sense can be learned via training data, the real-world version of that data needs to be multimodal and multifaceted, and regularities take experimentation to ferret out (see the references mentioned in Question 4, especially Gopnik). So there is a sense in which common sense can be acquired through interaction with training data, but exactly how that is done, and the variety of everyday situational training data that is needed, are open questions. I do think it's pretty clear that we need more than purely linguistic data found by reading documents on the internet to provoke the learning of human-breadth common sense. A question about embodiment was raised at the end of the talk, and I think in the grand scheme of things, if an AI is going to develop common sense on its own by “growing up,” it will need to be connected by sensors and actuators to the real world.

3. How are you going to hold an autonomous system responsible? Shouldn't it be the company bringing it to market?

This kind of question needs lawyers, ethicists, and governments to help develop answers. Are parents always responsible for the behaviors of their autonomous offspring (parents being the equivalent of the companies that brought their children “to market”)? We operate as if that’s the case until a certain age, and then those children become fully responsible for their own actions. Should AI companies be responsible until their systems become “of age” (whatever that might mean) but then they are absolved of further responsibility? We’re definitely going to need to think this through as it is likely to become increasingly relevant. It does seem fair to say that if we ever focus mainly on an autonomous system as responsible for its actions, those actions will need to be backed up by understandable reasons, as we discussed in the talk.

4. Will there ever be truly autonomous AI systems?

Yes, and I think there already are. There have been and will be deep space missions with autonomous control that, because of communication delays, cannot be teleoperated or have humans in the loop (see this article about Deep Space 1, and this site, for example). Autonomous vehicles — while perhaps not as smart as we’d like and not endowed with what we would call common sense — are deployed (note my example of the Waymo vehicles in San Francisco) and, to my mind, are fully autonomous. Whether or not this is a good idea given the state of current AI technology is a different story. You could even argue that the thermostat that controls the HVAC system in my house is truly autonomous — it runs without any moment-to-moment intervention by me. I can, of course, intervene manually and change a setting or turn it off (and I can turn off the autopilot in my car), but that doesn’t mean it isn’t autonomous while it is performing its normal function.
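To make that last point concrete, here is a minimal sketch (not the firmware of any real device; the class, setpoint, and hysteresis values are invented for illustration) of why even a simple thermostat can be called autonomous: its control loop runs with no moment-to-moment human input, while a manual override remains available.

```python
# A minimal, hypothetical thermostat controller: autonomous in its normal
# operation, but still subject to manual override by a human.

class Thermostat:
    def __init__(self, setpoint_c: float = 21.0, hysteresis_c: float = 0.5):
        self.setpoint_c = setpoint_c      # target temperature chosen by the human
        self.hysteresis_c = hysteresis_c  # dead band to avoid rapid on/off cycling
        self.heating = False
        self.enabled = True               # the human override switch

    def step(self, room_temp_c: float) -> bool:
        """One control step, taken with no human in the loop."""
        if not self.enabled:                                     # overridden: stay off
            self.heating = False
        elif room_temp_c < self.setpoint_c - self.hysteresis_c:  # too cold: heat on
            self.heating = True
        elif room_temp_c > self.setpoint_c + self.hysteresis_c:  # warm enough: heat off
            self.heating = False
        return self.heating

# step() can be called indefinitely without any human intervention; a person
# changing the setpoint or setting enabled = False is an override, not a
# reason to say the device was never autonomous while doing its normal job.
```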

5. Can a simulation world help an AI model gain common sense?

I believe that if it is rich enough and has high enough fidelity to the real world, then yes. There are things that happen out in the real world that may be impossible or unlikely to be simulated in a machine, but with a rich enough model and the kind of trial-and-error and coaching that we experience every day, a simulator could help a machine gain a substantial amount of commonsense understanding. The hard part is creating a simulation world rich enough that an AI can have the same breadth of experiences in it that we have in the real world, and in which the same breadth and number of things that can go wrong in the real world can also go wrong for the agent.

6. Two counterarguments about the intersection thought experiment from your talk (about how AI struggles to use commonsense reasoning in deciding whether to avoid a road when school is letting out): 1) School is letting out: the true reason behind this is still “saving time.” Can’t AI still calculate the shortest route based on the current time?

If all that were being asked was the calculation of the shortest-time route based on current known conditions, of course a system could do that — current GPS-based navigation systems already do. But in circumstances where traffic or road conditions in the near future are not yet known, current technology doesn’t have the wherewithal to (1) decide to even ask the question of when it expects the school to let out (note that this is a commonsensical projection into the future, not an observation of an already-crowded road; it also depends on knowing that it’s not a holiday or a half-day or the middle of the summer); or (2) quickly weigh all the qualitative costs and benefits of going one way or the other.

For example, even though traffic is not currently backed up, you and I might make a quick decision because we were aware that school was about to let out and we’ve driven through crowded streets of parents, kids, and cars before. But if it hasn’t backed up yet, my GPS or my Tesla autopilot would have no clue. On top of that, somewhat like the traffic light example, I have to decide (quickly, commonsensically) whether I care more about the exact number of minutes to get to my destination or about how complex or dangerous it is to navigate the crowd of people, strollers, and kids — or whether there is a different store with equally good groceries (i.e., whether to completely change my plan on the fly). It may not even actually take more time to go by the school, but doing so may cause me much more mental stress, which may override the elapsed-time factor for me.
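As a rough illustration (the route names, numbers, and weights below are invented, not taken from the talk): once someone has already decided which factors matter and how much, scoring candidate routes is trivial arithmetic. The commonsense work is everything this sketch takes as given, such as realizing that the crowd risk near the school is high even though the road is empty right now, deciding how much weight today’s stress deserves, or noticing that a different store would do just as well.

```python
# A toy sketch of the easy part of the decision: scoring routes once the
# factors and their weights have already been chosen by someone (or something)
# with common sense. All values below are hypothetical.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float        # predicted travel time
    crowd_risk: float     # 0..1, pedestrians and strollers near the school
    mental_stress: float  # 0..1, how taxing the drive feels

def route_cost(r: Route, w_time: float, w_risk: float, w_stress: float) -> float:
    # Weighted sum of costs; lower is better.
    return w_time * r.minutes + w_risk * r.crowd_risk + w_stress * r.mental_stress

past_school = Route("past the school", minutes=9.0, crowd_risk=0.8, mental_stress=0.7)
around = Route("around the long way", minutes=12.0, crowd_risk=0.1, mental_stress=0.2)

# The commonsense part is hidden in these numbers: knowing school is about to
# let out (so crowd_risk is high despite an empty road) and deciding, on the
# spot, how much risk and stress matter relative to a few extra minutes.
weights = dict(w_time=1.0, w_risk=10.0, w_stress=5.0)
best = min([past_school, around], key=lambda r: route_cost(r, **weights))
print(best.name)  # "around the long way" under these assumed weights
```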

If I or an AI system, using common sense, decide to avoid an area that isn’t currently crowded but that I think soon will be, I can indeed tell my GPS to find a different route. But the point I was hoping to make with the talk is that I will, on the spur of the moment, have to decide what questions are most important to me at the time, based on my current and long-term goals, my mental state, my own understanding of my driving proficiency, and expectations about the future drawn from past experience, and then, in a situationally appropriate way, taking all of that into account, make a decision about what to do next. And that is beyond the current state of the art in AI.

2) For reasons like “I want to check out a beautiful tree,” I think we want AI to be a tool, not another “independent thinker” that has random personalized preferences. Right?

I think there are many different views on what we want AI to be. The point of view I was taking in the talk is that in the future there will definitely be situations where we want AI systems to be autonomous, and they will have to make on-the-fly decisions based on a very wide variety of factors. There will be “independent thinker” scenarios for AI in the future. As for personalized preferences, what if my command to my self-driving car were “figure out the most scenic route from here to Tamaques Park, drive there, and take photos of the most beautiful trees you find along the way”? You may think that’s silly, but as AI gets more robust and competent, and even if we limit our use of it to being a tool, there are many, many tasks we will use it for that are not immediately about saving time. (Further, I was not implying that the tree is something that might specifically attract an AI system, only that there is a wide variety of reasons one might take a different turn; that whatever we do, we have reasons, usually articulable and defensible, for our everyday decisions; that ultimately most of those reasons are based on common sense; and that, right now, AI does not have good, articulable, and correctable reasons for doing what it does.)

Find the first round of questions and answers with Ron Brachman here. Watch his full talk here or read a recap here.