How AI features can change team dynamics

Generative artificial intelligence has been absorbed faster than any technology before it. Now, as AI applications become embedded in the culture and power structures of the workplace, we are beginning to see how GenAI tools will shape our conversational habits at work, directing what we say and who gets heard.

We're already seeing features that add AI-powered feedback to familiar tools. Zoom has an “AI Companion” that helps you catch up when you arrive late to a meeting, and in Teams, “Copilot” will summarize key discussion points for you. Read AI (an AI tool that records, transcribes, and analyzes interactions and distills them into summaries) goes even further, measuring meeting participants' engagement, sentiment, and airtime. These applications can be embedded in everyday workflows so seamlessly that users start relying on them almost without realizing it.

These tools offer productivity and feedback benefits, but there are also downsides to having them join our conversations. If people outsource listening wholesale to technology, skipping the work of distilling key messages for themselves, meetings may run efficiently, but understanding of, and commitment to, the agreed actions may be lacking.

From what we see, conversations about how to have AI-supported conversations are absent in many organizations, along with discussions about how to mitigate the risks and maximize the benefits. Unquestioning acceptance and uncritical implementation of the technology mean that mistakes, such as topics or people being silenced, are not “intelligent failures,” to use Harvard Business School professor Amy Edmondson's phrase. Conversational failures are inevitable when we venture into new territory, yet they are only intelligent if we do our homework before experimenting, think carefully as we make choices, and anticipate outcomes.

Based on our research on speaking truth to power, this article argues for the need to pay attention to how we speak when using AI and how we talk about AI. It highlights five closely interwoven areas of opportunity and potential problems arising from how the technology is used in the specific context of an organization's culture, with its embedded habits, perceptions of power differences, and norms about who has the right to speak and be heard.

Who speaks and is heard

We all have hunches about who gets the airtime, who gets interrupted, and who does the interrupting in meetings. AI allows us to move from hunch to fact by providing hard data on share of voice.
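
To make “share of voice” concrete, here is a minimal sketch of how such a figure could be computed from diarized transcript segments. The speakers, segment data, and function are hypothetical illustrations for this article, not Read AI's actual method, which is proprietary.

```python
from collections import defaultdict

# Hypothetical diarized transcript segments: (speaker, start_s, end_s).
segments = [
    ("Dana", 0, 95),
    ("Priya", 95, 110),
    ("Dana", 110, 300),
    ("Marcus", 300, 330),
]

def share_of_voice(segments):
    """Return each speaker's fraction of total speaking time."""
    talk_time = defaultdict(float)
    for speaker, start, end in segments:
        talk_time[speaker] += end - start
    total = sum(talk_time.values())
    return {speaker: t / total for speaker, t in talk_time.items()}

for speaker, share in sorted(share_of_voice(segments).items()):
    print(f"{speaker}: {share:.0%}")  # e.g. Dana: 86%
```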

With curiosity and positive intent, those who take up most of the airtime may find an incentive to reduce it, creating space for others. In a conversation Megan had with David Shim, CEO of Read AI, he told the story of a venture capital executive whose data from the app revealed that he had spoken for 80% of a pitch meeting, far too much given that the goal of the meeting was to hear the details of a potential investment. His heightened awareness meant that he talked less and listened more in subsequent conversations.

But the reasons for talking too much or too little are complex. If share of airtime becomes the simple measure of contribution, people who prefer not to talk, or who contribute by listening actively, may drift into speaking even when they have nothing to say.

Quieter people may also be those who do not trust having their words recorded and made permanent. An AI tool can raise the perceived risks of speaking up, entrenching patterns in which the safer, more powerful voices dominate.

What is said and heard

In virtual meetings, many of us are busy taking notes, creating our own record of what was said, and so we miss visual cues and fall prey to listening biases. Only later do we discover multiple versions of what was said and agreed, versions that do not match what is in our notebooks. Now an AI bot can take the notes, produce a summary, and list action points and responsibilities, while we turn our full attention to the people on screen, with a single shared version of what was said available to everyone.

However, topics such as failure, mental health, or criticism of strategy can get pushed underground as people silence themselves, fearing their perspective will be placed on permanent public record. This may especially be the case if the AI attaches subjective labels, such as “low sentiment,” which can read as shorthand for critical or unenthusiastic, when someone raises a particular topic. While action-oriented positivity is important, so is voicing early doubt, and our work often highlights situations where more skepticism and humility would have improved the quality of the conversation.

To address this issue, Read AI announces itself in the meeting and allows participants to opt out of recording and to delete data before reports are generated. In a world without social hierarchies, this would be fine; in workplaces full of them, the question is whether employees will feel empowered to make that choice when their boss is running the meeting.

When we talk and listen

As humans, our energy varies throughout the day and from day to day. AI can take these rhythms into account, scheduling meetings for when participants are most engaged.

Read AI, for example, can track your engagement over the week and recommend when organizers should schedule meetings to get the best out of you. An organizer might be told that your engagement peaks before 10 a.m. and then declines.
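
As an illustration of how such a recommendation might work, the sketch below aggregates hypothetical per-meeting engagement scores by starting hour and suggests the strongest slot. The scores and aggregation are invented for this example; how commercial tools actually score “engagement” is proprietary and, as discussed below, contested.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical engagement scores (0.0-1.0) by meeting start hour,
# collected over one week.
observations = [
    (9, 0.82), (9, 0.78), (10, 0.71),
    (11, 0.55), (14, 0.48), (16, 0.40),
]

by_hour = defaultdict(list)
for hour, score in observations:
    by_hour[hour].append(score)

# Recommend the hour with the highest average observed engagement.
best_hour = max(by_hour, key=lambda h: mean(by_hour[h]))
print(f"Suggested slot: {best_hour}:00 "
      f"(avg engagement {mean(by_hour[best_hour]):.2f})")
```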

Tracking engagement could also help break the “domino” pattern in which one meeting is scheduled right after another. AI can enforce the breaks between meetings that we know we need but don't take, because we collectively overestimate our ability to pay attention.

AI may also make it possible to identify unnecessary meetings (by measuring engagement or analyzing action items), helping to create spaces at work where people can stop and think rather than stay relentlessly busy.

However, all of this depends on the reliability of “engagement” metrics. A core problem with relying on GenAI is the misapplication of assumptions drawn from limited data sets. Tracking engagement and emotion remains difficult and can produce incorrect conclusions when cultural differences, neurodiversity, and introversion receive too little attention.

Looking away, pausing, frowning, or using humor (particularly sarcasm) may lead the AI to conclude that you are disengaged or in low spirits when in fact you are thinking, imagining, or trying to lighten the mood.

Where we talk and listen

In our global research on speaking truth to power, most of the 20,000 employees surveyed agreed that they are more circumspect in formal business meetings, the setting where AI is most used, than in one-on-one conversations with superiors or informal exchanges with colleagues. They may become even less willing to speak on formal occasions if their words are recorded and shared in ways they cannot predict.

As our research shows, leaders are often the ones who feel most comfortable in formal meetings, and they are likely to sit in an “optimism bubble,” overestimating how approachable they are and how candid others are being, while ignoring the fact that more junior employees tell them only what they think they can bear to hear.

AI-enabled meetings may exacerbate this problem, pushing candid conversations offline, undermining shared understanding, and increasing the conversational workload.

How we talk and listen

Many organizations embrace a feedback culture without acknowledging power dynamics. Giving and receiving feedback can be awkward, if not career-limiting; not many people will call out their boss for answering emails while others talk, or admit to tuning out of a meeting because it is boring.

An AI bot is not afraid to speak and feels no embarrassment. Many of us will soon receive feedback (even if we don't ask for it) on our virtual presence and how we speak and listen: whether we keep interrupting (and whom), whether our body language silences others, whether we speak too quickly or use exclusionary language.

This can be very valuable. However, if employees feel they are being monitored and graded on how they communicate, “performance” behaviors may take over as meeting participants game the system (for example, by using words and body language they know the AI rates highly). According to David Shim, “people can't keep it up” for long, but our research suggests that employees readily learn the cultural rules of meetings and can sustain performative communication indefinitely.

One of the benefits artificial intelligence could bring concerns how we pay attention. When someone is fully present with us, listening without judgment, we may be more willing to talk, think, and change, and performance improves. Virtual meetings are often treated as opportunities to multitask, undermining the productive potential of bringing people together. Read AI measures eye movements and facial expressions to assess whether we are paying attention and interacting. This may make doing other work during meetings harder, or at least make it visible. AI could help break our multitasking habit by helping us eliminate unproductive meetings and encouraging us to listen better in the meetings that remain.

The question is whether participants learn how to speak and listen skillfully, or merely how to seem to speak and listen skillfully.

What determines whether AI changes our conversations for better or worse?

Many of the positive outcomes described above assume that AI is reliable (i.e., trustworthy and accurate), that it will be used well by those in power, and that it will be implemented in a psychologically safe workplace. There are obvious reasons to doubt that these conditions will usually hold.

The balance between the pros and cons of AI in the way we talk and listen comes down to three things:

  • The influence of power and status: How people's sense of relative power affects their trust in AI tools, their ability to opt out of AI tracking, and their influence over how data is used. The key is to acknowledge the power culture that already exists. The main cultural divide, drawing on Joyce Fletcher's work, is whether AI is used to control others into meeting expectations or to support others in making their own choices.
  • What we consider knowledge: Whether we hand over to AI the responsibility of listening to and understanding each other, or use AI to strengthen our ability to engage with each other directly. If we do the former, relying too heavily on AI as the source of our knowledge, we may lose the muscles we need to learn, coach, listen, and speak, and so become governed by the intentions of the technology suppliers and by the data sets from which the AI learns and then creates (and even hallucinates).
  • Pausing to learn: Whether we create space for learning as we implement the new technology and make wise choices about its adoption. Within our current philosophy of work, AI's “productivity gains” can lead to more busyness, more tasks, and even less downtime; within a philosophy that values attention and relationships, the time saved can create deeper connections of trust and more reflective thinking. There are implications for what is said and who is heard in both scenarios.

Applying AI wisely invites a philosophical reckoning, forcing us to confront our never-ending quest to do more, faster. It requires self-restraint and the ability to lift our instrumental gaze, hypnotized as it is by rationality and targets, and engage our relational gaze, which lets us see how deeply connected we are to each other and to the world around us, and how our choices now frame the world we and our children will inhabit. In that way, perhaps AI can help us have the conversations that matter.
