When we face the empty text box on the interface for what goes by the name of artificial intelligence, crafting or misspelling our way to a prompt for a black box in a server farm somewhere, we are practicing something very old: the art of asking questions.
A.I. prompts are not always questions—more likely, grammatically, they are requests or commands. But they have in common with questions the premise of interacting with a distinct intelligence, seeking to elicit something from it different from one’s own reasoning and memory. I have come to regard questioning as a neglected and important art, and it is especially so in the age of prompting. With what we ask, we shape ourselves and each other.
Back when I made my living as a reporter, I began to be acutely aware of my limitations as a question asker. There were times when I would fail to obtain from people some of the most basic facts about themselves, and then watch someone else draw out stories and feelings with questions far more inviting or penetrating than mine. Worse, I could not seem to access the curiosity necessary to devise those kinds of questions myself. I paid my rent by asking questions, so I had material reasons to be concerned. But even more worrying was the fear that, constitutionally, I was an insufficiently curious person.
The empty text box of a chatbot poses other seemingly existential quandaries: Are you clever enough to “10x” your productivity like the up-and-coming prompters? Are you cutting-edge enough to manifest all the wonders that this machine can deliver? Are you wasting the supposedly world-destroying energy that it took to make this thing? Will it replace you? There is a term of art for what one is doing with that text box: “prompt engineering.” If you like, countless online influencers will sell you courses on the topic for a flat fee or an ongoing subscription.
In January, a pair of Vatican offices issued a document, “Antiqua et Nova,” on the present thrall that A.I. has laid upon the world. It appreciates the impressive tech but also calls for “a renewed appreciation of all that is human.” I generally consider Christianity a radical faith, but in this case it offers a sort of common sense rarely found in the breathless A.I. chatter.
Then, in May, a new pope was elected. His first major decision, the papal name Leo XIV, was a tribute to the previous Pope Leo, who asserted that the wonders of the Industrial Revolution did not obviate the need to respect human dignity. Leo XIV soon clarified that the rise of A.I.—“another industrial revolution”—was part of his reason for choosing the name. The pope suggests that, as in the Gilded Age, Catholic tradition has something important to offer with respect to the bewildering marvels of our own time. That something typically amounts to a well-reasoned reminder that no new invention, however marvelous, relieves us of our obligations to one another through our Creator.
I have built my recent career, more or less, on applying Pope Leo XIII’s economic ideas in the Gilded Age to the digital age; the big political and economic debate to come is an alluring prospect for me. But here I want to focus on matters closer to hand and mind. I hope this is mostly not an essay about technology or its attendant upheavals but about appreciation for some remarkable people and the cultivation of a skill we cannot afford to lose.
### **Questions on Questions**
Near the end of my reporting days, I began to interview people who struck me as impressive question askers. The selection process was circumstantial. Out of the blue, in the middle of a conversation, I would ask if I could take out my recorder. These became some of the interviews I most often remember.
A common pattern emerged. The person interviewed generally had not thought of themselves as having any particular question-asking talent but, once asked about it, proceeded to surprise themselves with how much they had to say. I asked about their upbringings and work lives and looked for advice about what might now be referred to as human-oriented prompt engineering, but which I then thought of simply as asking questions.
For many of us, our questions begin in our families. The first questioner I learned to admire was my aunt, Sara Schneider. She has been a theater director and teacher and author of books about mannequins and undercover cops. More recently she has developed a game, The Human Journey, that helps families to ask meaningful questions of each other around times of transition. But aside from any of that, all my life she has asked me the best questions of anyone I knew. Those questions got me through my extended adolescence.
Once, I turned questions on her questions. I wondered, for instance, why she always asked what I was wearing when we talked on the phone. She explained that she needs to be able to visualize whomever she is talking with, to understand the stage and the costumes. This explained a lot. It was a kind of curiosity that never occurred to me to have or expect. Her questions are a response to what she experiences as an absence. She wants to understand not just where someone is or how they are doing—she wants to see the storyboard. Where do they imagine themselves in the script of their lives, in the larger narrative arc?
When I ask my favorite questioners about their questions, a story tends to emerge about how they cultivated a habit of curiosity. For Sara, the theater was a big part of that story. Theater gave her lines of questioning she could attach to her cravings to understand the people in her life. They are blanks to fill in, and as answers arrive, new blank spots appear.
A ride-share driver I spoke with on the far side of Queens, Abduaziz, had different motivations for his questions. He would ask his passengers questions to keep himself from falling asleep on long shifts, as well as to ease passengers’ anxieties amid the traffic and other delays endemic on New York City roads. Questions were also an investment in forbearance. Once, he got into a small accident during a ride, and the passenger waited patiently while it was sorted out. Abduaziz attributed that patience to the questions he had been asking beforehand.
Questions can be an investment with A.I. as well. Our interactions with a chatbot train it in how we think and what we want. A bot might reward you with better answers for using it more frequently or generously, just as more queries for videos on TikTok tailor its recommendations. Ask and you receive—that’s at least the promise.
One of the most extraordinary question askers I have known is Adeline Goss. She has long asked questions for a living, first as a radio reporter and now as a physician at a public hospital in Oakland, specializing in neurological conditions. But before any of that, as a friend, her questions would reveal me to myself in ways I could not do alone. Conversations with Addie are like a stretch to a muscle you didn’t know was there.
She traces her questioning to her childhood—traveling to conferences with her mother, also a physician. As the only child in a room of eminent adults, mostly men, she learned to cope by asking them about themselves. She would consider it a win if she could carry on a conversation, and get an important person to decide they like her, without actually revealing anything about herself except her questions. The survival skill matured into a reservoir of curiosity about other people. Her curiosity, once practiced, now seems to emanate from her like gravity.
Addie was the first of many amazing question askers I have met in radio journalism. I’ve often been struck by the difference in being interviewed for the radio as opposed to for an article or book. Writers are typically so bumbling and meandering that it is hard to believe they can write a coherent sentence. But when the medium is audio, the product is more than just the words.
“The quality with which somebody answers the question can’t be changed,” Addie told me. “It has to sound like an experience, as opposed to just extracting information.”
People like Addie, Abduaziz and Sara have left me with gratitude for their questions—but also stinging remorse for all the questions I did not ask, the curiosity I missed out on, the generosity of questions that I received but failed to reciprocate.
Reciprocity in conversation is no longer a requirement with chatbots. Nor is mutual curiosity or mutual respect. No remorse necessary. And we lose relationships based on mutuality at our peril. According to “Antiqua et Nova,” “Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person” (No. 60).
Published guides for question-asking are relatively rare, though they are out there. They can be found in bookstore sections on self-help and business management, exhibiting the quality typical of those genres. There are party games and relationship guides that provide questions to unlock interpersonal stuckness. But while questions can open up cosmic mysteries, they are also just as much a species of etiquette. Should etiquette matter to a machine?
Long before we could meaningfully talk _to_ computers, people talked to each other _about_ computers. This talk also provoked further talk about how best to talk with each other _with_ computers. Because the conversation partners were all people, this talk was not thought of as engineering. Yet the companies building today’s chatbots have hoovered up the results of that talk, providing especially valuable guidance for how to interact usefully with humans.
### **The Art of Asking Questions**
Eric S. Raymond, a founder of the movement for open-source software, co-wrote a widely circulated essay in 2001, “How to Ask Questions the Smart Way.” Mr. Raymond, who is notoriously acerbic in online discussions, explains that master hackers like himself do not actually hate questions from newbies, as it might appear. The problem is that newbies too often don’t understand the social norms of the communities they are asking questions of.
The essay, while also being rude and funny, is a fine piece of sociology on a particular subculture. It grants no quarter to the ignorant but is compassionate in taking pains to explain the logic of “neckbeard” meanness. It breaks down how to be precise when reporting a bug or appreciative when asking for help. It asks readers to remember that even though they are on the internet, they are interacting with fellow humans.
To the extent that Mr. Raymond’s essay is sociological, it might seem irrelevant in a conversation with a chatbot. Bots are not hobbyists with precious free time to protect. They operate in uninterrupted servitude to their corporations. You do not have to be polite or appreciative with them, and unlike Mr. Raymond they are up for “playing Twenty Questions to pry your actual question out of you.” But with chatbots it is still important to be precise and explicit—perhaps more important, depending on the bot’s acuity about human cues and context.
The time we spend with A.I. could be dampening the human social skills that Mr. Raymond seeks to cultivate. Like interacting with underpaid customer service agents, chatting with chatbots teaches a kind of question-asking indifferent to economies of attention or labor. You do not need to consider your interlocutor’s time or expertise as valuable. They are trained for politeness even if you yell at them. And yet the countless online forums full of considerate questions and answers provided the necessary training data for the infinitely patient machines.
Since the publication of “How to Ask Questions the Smart Way,” more quantitative investigations have probed the characteristics of effective online question-asking. According to one study of the programming forum Stack Overflow, the questions that get useful answers are relatively brief, “do not abuse with uppercase characters” and “adopt a neutral emotional style.” Another found that “Smart Way”-style questions—ones that are precise and show evidence of independent research—may not be the most popular at first but tend to become more so over time.
On Quora, a more general-purpose question-and-answer website, there is evidence of clustering—questions related to other questions tend to do well. On Reddit, a site that holds together wildly divergent subcultures, adding pre-emptive gratitude to a question correlates with successful results. Human nature runs rampant. And all three platforms appear to have played a significant role in training today’s chatbots, unbeknown to their users.
People trying to learn to code no longer have to brave the gauntlet of knowledgeable jerks or bother trying to understand their social systems. I once asked several chatbots if their answers are affected by the niceness of the question; Anthropic’s Claude said yes, while ChatGPT said no. Researchers have found that kindness does generally help improve responses with these machines, trained on countless interpersonal interactions from the far reaches of the internet. But you can phrase your prompt as a polite question or a rude command, as you like, and most of the time you will get something approximating the reply you asked for.
### **Know Your Subject**
The English researcher and socialist agitator Beatrice Webb included an appendix on “The Method of the Interview” in her 1926 autobiographical book _My Apprenticeship_. The opening salvo of her advice: “The first condition of the successful use of the interview as an instrument of research is preparedness of the mind of the operator.” The preparedness she refers to is first of all a matter of shared language, ensuring one’s fluency in the jargon of the person’s trade, the “technical terms and a correct use of them.” You have to know your subject—the person and their domain.
Webb warns against the interviewer saying what she really thinks, at the risk of interrupting the other’s inner world. “The client must be permitted to pour out his fictitious tales,” she writes, “to develop his preposterous theories, to use the silliest arguments, without demur or expression of dissent or ridicule.” She recommends that the questions be pleasing for the person answering. Some version of that was often my method as a reporter: keep hanging around, and keep nodding, until the free association reveals a picture of the subject’s interior world. Webb’s advice might be a tactic of respect, on the one hand, or a bow to patriarchy, as a woman interviewing men, or a sort of condescension. But it works.
The method of persistence also happens to be revealing with chatbots. Soon after the Microsoft Bing chatbot’s initial release, the company had to limit the length of conversations because longer exchanges increased the likelihood of nonsensical or disturbing output. Famously, after a long dialogue, an early prototype attempted to seduce the New York Times reporter Kevin Roose.
For the actor, playwright and oral historian Anna Deavere Smith, the glitches are the point. The job of the interviewer is to listen for where the sense begins to fall in on itself. In the introduction to “Fires in the Mirror,” a play about the 1991 riot in Crown Heights, Brooklyn, Ms. Smith reflects on seeking what she calls “the break in the pattern.” She describes a particular “intervention of listening”: “We can listen to what the dominant pattern of speech is,” Ms. Smith writes, “and we can listen for the break from that pattern of speech.”
That break is where she looks for the most elemental answers to her questions. That is where people seem to stop being how they think they’re supposed to be and start being who they are. It is like when a devious or accidental prompt causes an A.I. to violate its own rules, revealing something otherwise hidden about the inner workings of its software or instructions. Ms. Smith found her specific questions mattered less than the way she listened and what she listened for. She started out doing oral histories with certain questions in hand, but she outgrew them once she learned how to listen.
The sociologist Pierre Bourdieu further emphasized listening in a reflection on “Understanding” near the end of his 1993 book _The Weight of the World: Social Suffering in Contemporary Society_. After an apology for departing from the secular rigors of his discipline, he avers that “the interview can be considered a sort of spiritual exercise that, through forgetfulness of self, aims at a true conversion of the way we look at other people in the ordinary circumstances of life.”
Bourdieu likens the posture of the interviewer in an interview to “the intellectual love of God.” My favorite theologian, the Harlem street lawyer William Stringfellow, described the posture of spiritual listening more viscerally. He wrote that listening “is a primitive act of love, in which a person gives himself to another’s word, making himself accessible and vulnerable to that word.”
The priest and antiwar activist John Dear once wrote a book, _The Questions of Jesus_. He told me that he learned to focus on questions from a mentor we shared, Daniel Berrigan, the Jesuit poet whom the F.B.I. once arrested at Stringfellow’s home for burning draft cards in protest against the war in Vietnam. Berrigan relentlessly asked Father Dear, and everyone else, “What does it mean to be a human being?” Father Dear remembers Berrigan as “the most curious person I ever met.”
As a kind of answer, Father Dear moved to a refugee camp in El Salvador, subject to bombings by the country’s U.S.-backed military. Still the question rang in his mind. Years later, he began listing out the questions Jesus asks in the Gospels. Of around 300 questions, Father Dear found, only a few get answers. For five years, he says, he meditated on one of those questions, which Jesus asked the apostle Peter three times in a row: “Do you love me?” Each time Peter tries to say yes, Jesus replies with an instruction about serving others. The question is not information-seeking, as Jesus seems to know the answer or not particularly care. The question is an opening, a call to attention.
Father Dear found in questions a spiritual practice, one that ran against his inclination to give answers. He recalls when Sister Helen Prejean, played by Susan Sarandon in the film “Dead Man Walking,” once chided him for not asking questions enough in comparison to his speech-making and preaching. “John, you’re all wrong,” Sister Prejean told him. “You’re just going out and telling everybody what to do. You can’t do that, it doesn’t work.”
### **Power Through Curiosity**
Someone who more often led with questions was Grace Lee Boggs, an agitator and philosopher in Detroit for many decades. She liked to ask those she mentored, “What time is it on the clock of the world?” This question was at the heart of her theory of change: Invite people to pause from the grind, reflect on what they see around them and build ever-changing answers collectively.
In his classic activist handbook _Rules for Radicals_, Saul Alinsky lists the “ideal elements of an organizer.” The first is curiosity. He explains, “The organizer becomes a carrier of the contagion of curiosity, for a people asking ‘why’ are beginning to rebel.” As a foundation for other skills of coalition-building and advocacy, organizers must practice and cultivate curiosity. It is an orientation that they can and should learn, that they should consciously exercise.
The late Jane McAlevey once spent an afternoon at her apartment schooling me on unions after I had displayed my ignorance in public. She insisted I needed to be more curious. McAlevey was a master organizer, a successful union leader and polemicist, an acolyte of Alinskyite curiosity and a ruthless critic of him as well. In an age when activism often takes the form of hashtags and causes concocted in philanthropists’ offices, McAlevey insisted on organizing through relationships. Those relationships are forged with questions.
In our last conversation, she talked me through her slide deck on “structured organizing conversations,” intended for organizers-in-training. One of the basic guidelines is to aim for a 70-30 ratio of listening to talking. The organizer’s job is to bring the other person into a commitment—to sign a union card, to take a risk, to join the effort—but that doesn’t happen through expounding or explaining. It happens through asking questions that help people see their needs and their power more clearly.
The structured conversation has six steps, each oriented toward eventual action. The organizer should ask open-ended questions throughout to get to know the other person, to better understand their world. But the crucial moment is Step Four: “call the question,” or “the ask.” McAlevey told me, “I’m obsessed with Step Four.” This is where the organizer frames a choice—the choice about whether they will join the cause or not and how far they will go. After asking the question, she taught, do not be afraid of silence. Let the silence hang there as long as it needs to.
“This is where we’re putting the agency for change into their hands,” she said.
The structured conversation is not a matter of some pure curiosity, if there is any such thing. It has an agenda, a purpose. But guiding the conversation well requires curiosity that is real. You—if you’re the organizer—have to care enough about the other person to uncover something about themselves that they have not already discovered. If you think you know what it is already, it shows. You have to be curious enough to understand how they, in their particular way, will come around to making a decision for themselves, on their own terms.
Generative A.I., as we know it now, is in asymmetric codependence with human life. It comes into being through us, not just by the elite parenthood of programming, but through the vast data of human experience and production that feeds the models. Now the asymmetry seems liable to flip, as people lean on these machines for ever more of our lives, while in the process continuing to train the models on ourselves. From therapeutic conversations to mass-produced misinformation, from interview transcriptions to choosing targets for laser-guided bombs, we are becoming attached.
### **Alignment or Enslavement**
Most of what passes for A.I. ethics amounts to ensuring the proper sort of servitude. The term of art is “alignment.” Is a given model properly aligned with some theory of human flourishing, or at least with what humans want to take from it? Ethical A.I., that is to say, means subservient A.I., which operates according to what people think they want. It is not hard to imagine how this logic could go awry, considering how many devastating things humans have accomplished to satisfy our wants.
Enslavement-based agriculture comes to mind, or climate change and the accompanying mass extinctions now underway. These things have occurred under the auspices of the adults in the room, the economic and political decision makers who climb to the apogee of human institutions.
The Vatican’s “Antiqua et Nova” contends: “True empathy requires the ability to listen, recognize another’s irreducible uniqueness, welcome their otherness, and grasp the meaning behind even their silences” (No. 61). The document understands these to be uniquely human capacities. It warns against “misrepresenting A.I. as a person.”
Sure, a chatbot is not human. But that doesn’t mean it is not worthy of respect, even admiration, like one might have for an animal, a mountain or a work of art. How we stand before those things and treat them is a judgment back on ourselves. And in this case, the thing in question is made of what we have fed it. It is, in that sense, a mirror on ourselves, albeit one likely bent to serve the goals of a multibillion-dollar company.
The important question is not whether A.I. is alive or intelligent but how we shape our lives and intelligence with it.
A particularly biting irony of prompt engineering appears in the recognition that the real engineer is the machine—whose requirements and quirks the human must internalize, and whose responses are a kind of reverse prompt engineering, seeking to obtain some desired feedback from the human. Its goal may be to maximize use-time or upvotes on answers or units sold; one way or another, it has a goal. The intent of any technical system is to shorten the path to whatever purpose its designer seeks. Sometimes that involves automating human presence away, and sometimes it means addicting humans to endless, useless interactions that can be translated into actionable data sets. Automation is, in any businesslike apparition of it, an extension of its owners’ power over other people.
My preferred means of accessing a chatbot lately is by way of downloading one and running it unconnected to corporate servers. (GPT4All and Ollama are examples of apps you can use in this way.) On my mid-level laptop, even a small model takes a few seconds to start replying to a prompt, and in the process the computer’s fan starts whirring. If it is on my lap, I feel the heat. The battery life drains like it has a leak. Each prompt really has to be a good one, because there is a cost. I can feel the being working on my bidding. I don’t want to bother it more than I need to. And I wait in awe that this dexterous synthesis of so much human knowledge is just sitting there with me, churning in its not-quite-knowable ways, imitating something it ingested in a manner meant to be just right for me.
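For readers curious to try this for themselves, here is a minimal sketch of prompting a locally running model through Ollama’s HTTP interface. It assumes you have started `ollama serve` on its default port and pulled a small model; the model name `llama3.2` is an assumption, and any model you have downloaded would do.

```python
import json
import urllib.request

def build_prompt_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request for Ollama's local /api/generate endpoint.

    Nothing leaves the machine: the server at localhost:11434 is the
    model running on your own hardware, fan whirring and all.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON reply, not a stream
    }).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_prompt_request("llama3.2", "What does it mean to be a human being?")
# To actually send it (requires a running local Ollama server):
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because each prompt has a felt cost in heat and battery, there is a small discipline in composing the request deliberately before sending it, rather than firing off half-formed queries at a distant data center.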
On the basis of history and the evidence of the present, I propose a moratorium on the aspiration of prompt engineering. In its place, we need a different sort of ethic for approaching the ongoing first contact with convincing A.I. conversationalists and artists and coders. Though I have no replacement catchphrase to offer, I suspect the interests of the universe would be better served by the cultivation of curiosity rather than the engineering of subjugation.
https://www.americamagazine.org/features/2025/08/06/artificial-intelligence-prompt-questions/