{"id":132,"date":"2025-10-20T07:50:59","date_gmt":"2025-10-20T07:50:59","guid":{"rendered":"https:\/\/bradleymonk.com\/wp\/?p=132"},"modified":"2025-10-20T07:50:59","modified_gmt":"2025-10-20T07:50:59","slug":"when-ai-never-sleeps-the-problem-of-social-tempo-in-human-ai-teams","status":"publish","type":"post","link":"https:\/\/bradleymonk.com\/wp\/?p=132","title":{"rendered":"When AI Never Sleeps: The Problem of Social-Tempo in Human-AI Teams"},"content":{"rendered":"\n<p>AI agents are about to become teammates. Not \u201ctools,\u201d not \u201cassistants,\u201d but actual actors in the decision chain \u2014 gathering data, reasoning over it, and making or recommending moves in live workflows. They\u2019ll sit beside analysts, operators, designers, and scientists, feeding information, asking for clarification, and negotiating shared goals.<\/p>\n\n\n\n<p>That vision sounds thrilling. Until you remember one small problem.<br>AI agents don\u2019t live in <em>time<\/em> the way we do.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Different Clocks, Same Team<\/strong><\/h2>\n\n\n\n<p>Humans work in embodied time.<br>We think in minutes and hours, not nanoseconds.<br>We send an email, grab coffee, task-switch, and wait for a reply. Waiting isn\u2019t wasted; it\u2019s a natural pacing mechanism that lets our attention, memory, and emotion catch up to the world.<\/p>\n\n\n\n<p>AI agents, on the other hand, inhabit computational time. Their \u201cseconds\u201d are microseconds, and their concept of patience is undefined. Give an AI agent the ability to request information from human teammates and it may <em>quite literally<\/em> ask for updates faster than you can blink.<\/p>\n\n\n\n<p>Without constraints, a team of agents could spam thousands of emails or chat messages a minute, each politely asking for clarification or more data. That\u2019s not collaboration. 
That\u2019s denial-of-service by overenthusiasm.<\/p>\n\n\n\n<p>If we want human\u2013AI teams to function, we\u2019ll have to engineer something most AI systems have never needed before: <strong>a sense of time.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Missing Concept: Temporal Cognition<\/strong><\/h2>\n\n\n\n<p>Time, for humans, is more than a clock. It\u2019s a cognitive framework.<br>Most adults are relatively good at both tracking time and <em>predicting<\/em> it. We know that an email sent on Friday afternoon probably won\u2019t get a reply until Monday. We know that a colleague who hasn\u2019t answered by Wednesday might be out of town. We use those intervals to gauge not only progress but intent.<\/p>\n\n\n\n<p>Now imagine an AI agent working in the same office.<br>It fires off a query: \u201cHi Jim, could you send me the latest metrics for Project X?\u201d<br><strong>Then what?<\/strong><br>Does the agent simply wait? Does it poll Jim\u2019s inbox every 10 seconds?<br>Does it conclude that Jim is unresponsive and escalate to his manager?<br>Does it keep working, trawling the web, scraping internal databases, and synthesizing speculative reports, all while racking up compute time and energy costs on an issue that would have been resolved by Jim\u2019s two-line email Monday morning?<\/p>\n\n\n\n<p>Humans have centuries of cultural evolution embedded in how we <em>wait<\/em>.<br>AI agents have none.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Empty Interval Problem<\/strong><\/h2>\n\n\n\n<p>The moment between a request and a reply \u2014 what cognitive scientists might call <em>the empty interval<\/em> \u2014 is where human-AI teams will either sync or fracture.<\/p>\n\n\n\n<p>In that gap, humans multitask, prioritize, and maintain a mental model of \u201cwhat\u2019s cooking.\u201d We might not consciously track the passing seconds, but we <em>feel<\/em> when it\u2019s time to follow up. 
We intuit social latency norms: when to nudge, when to drop it, when to worry.<\/p>\n\n\n\n<p>An AI agent, left ungoverned, has no such rhythm. It experiences the waiting interval as infinite potential compute time \u2014 an invitation to iterate endlessly. In effect, it fills the void with activity, not patience.<\/p>\n\n\n\n<p>That might sound productive, but it\u2019s a disaster for coordination. The agent\u2019s world model drifts faster than the human team\u2019s ability to update it. By the time the human responds, the AI may have moved on, built new assumptions, or even invalidated its earlier request.<\/p>\n\n\n\n<p>The mismatch becomes not just temporal but epistemic \u2014 two teammates working on different timelines, and therefore different realities.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Cost, Trust, and the Tempo of Inquiry<\/strong><\/h2>\n\n\n\n<p>Even before you add humans, multi-agent systems already struggle with when and how to ask each other for information. Every query costs bandwidth and compute time. In a hybrid human\u2013AI team, those costs become cognitive and social too.<\/p>\n\n\n\n<p>If no cost is imposed on an AI\u2019s queries, the system will learn to ask for everything, all the time.<br>If the cost is too high, it may go silent \u2014 hoarding uncertainty rather than distributing it.<\/p>\n\n\n\n<p>The right balance depends on something human factors researchers call <em>trust calibration<\/em>: knowing when to rely, when to verify, and when to defer. But now that calibration must run both ways. 
The human has to trust the agent\u2019s restraint, and the agent has to trust that the human will eventually respond \u2014 at a human tempo.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Designing Patience<\/strong><\/h2>\n\n\n\n<p>We talk a lot about \u201calignment\u201d in AI, but rarely about <em>temporal alignment<\/em>.<br>It\u2019s not enough to align goals; we have to align clocks.<\/p>\n\n\n\n<p>That means designing agents that can:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Model expected human response times.<\/strong> Learn empirical latency patterns across teammates and contexts.<\/li>\n\n\n\n<li><strong>Estimate opportunity costs.<\/strong> Quantify when waiting is cheaper than acting \u2014 especially when compute, power, or data retrieval have real-world costs.<\/li>\n\n\n\n<li><strong>Throttle communication rates.<\/strong> Adopt \u201ctemporal etiquette\u201d \u2014 social pacing rules that prevent spamming and respect human bandwidth.<\/li>\n\n\n\n<li><strong>Signal temporal state.<\/strong> Express what they\u2019re waiting for, how long they expect to wait, and what they\u2019ll do if the delay exceeds a threshold.<\/li>\n<\/ol>\n\n\n\n<p>In short, agents need to learn the same thing humans do in every workplace: <strong>how to wait without stalling.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>From Clock Speed to Team Tempo<\/strong><\/h2>\n\n\n\n<p>Humans and AIs don\u2019t just differ in how fast they think \u2014 they differ in <em>what time means<\/em> to them.<br>For humans, time is linear, embodied, and emotionally textured.<br>For AIs, time is a scheduling parameter.<\/p>\n\n\n\n<p>Bringing those two chronologies into harmony is not a UX problem or an optimization problem. 
It\u2019s a new domain of <em>temporal human factors<\/em>: how to manage coordination across cognitive systems that literally live in different temporal realities.<\/p>\n\n\n\n<p>The big open problem: AI temporal workflows must be aligned with human tempo before we onboard AI agents with any autonomy over their goal-based tasking.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI agents are about to become teammates. Not \u201ctools,\u201d not \u201cassistants,\u201d but actual actors in the decision chain \u2014 gathering data, reasoning over it, and&#8230;<\/p>\n","protected":false},"author":1,"featured_media":133,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_editorskit_title_hidden":false,"_editorskit_reading_time":0,"_editorskit_is_block_options_detached":false,"_editorskit_block_options_position":"{}","footnotes":""},"categories":[1],"tags":[],"class_list":["post-132","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","wpcat-1-id"],"_links":{"self":[{"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/wp\/v2\/posts\/132","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=132"}],"version-history":[{"count":1,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/wp\/v2\/posts\/132\/revisions"}],"predecessor-version":[{"id":134,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/wp\/v2\/posts\/132\/revisions\/134"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=\/w
p\/v2\/media\/133"}],"wp:attachment":[{"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=132"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=132"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bradleymonk.com\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=132"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}