I built a multi-agent project that lets users ask questions about their AWS infrastructure (three AWS accounts managed by AWS Organizations) and get answers in a human-readable way.
The system connects to the user's AWS infrastructure and provides answers by reading various log types and making API calls to multiple AWS resources.
Project repo
Part 1: I built a multi-agent project on AWS, with Strands AI and AgentCore
Part 2: Give 'em something to read! Building a data pipeline for your agentic AI project
Part 3: Make 'em safe! Security for your agentic AI project
Part 4: Make 'em remember! Memory in the agentic AI project
Part 5: Make 'em visible! See what is happening inside your agentic workflow
Part 6: When shebangs party hard with your Mac path on OpenTelemetry
Part 7: Make 'em behave! Don't let your AI agents hallucinate
This one is a story about how I literally lost two days of my life, and I am still not sure what actually happened.
The situation is so weird (and funny) that it deserved a separate article.
Fat fingers syndrome
So while I was playing with the agents, I accidentally deleted the .bedrock_agentcore/ directory, and before I realized what happened it was already gone from the trash as well.
For your information, that's the hidden local cache directory that AgentCore creates. When it comes to deploying the agents to the AgentCore runtime, the content of that directory is literally all you've got.
How (and WHY!!!) would someone delete that?

I bet one of the reasons AWS hides it is that you're not supposed to mess with it.

Good news: it is re-created on the next deploy.
Bad news: it is re-created on the next deploy.
Confusing? Oh, I hear you!
Anyway, I was able to fix it (my life minus two days), and now I am going to recreate it on purpose.

The Prerequisites
It is important to mention that this happened only when these two circumstances met (a quick sanity check follows below):
observability was enabled in the .bedrock_agentcore.yaml file:
observability:
  enabled: true
the open-telemetry package was installed in the agents:
aws-opentelemetry-distro>=0.17.0
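To quickly verify both conditions, here is a minimal check, assuming the dependencies are pinned in requirements.txt next to the agents, as in this project:
(.venv) agents % grep -A1 'observability:' .bedrock_agentcore.yaml
observability:
  enabled: true
(.venv) agents % grep 'aws-opentelemetry-distro' requirements.txt
aws-opentelemetry-distro>=0.17.0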
Before I do anything, let me check that I am able to invoke my agents.
Check that the .bedrock_agentcore directory actually exists:
(.venv) 00-PROJECT-FILES % cd agents
(.venv) agents % ls -la
total 344
drwxr-xr-x@ 23 michalsalanci staff 736 Apr 21 14:38 .
drwxr-xr-x@ 25 michalsalanci staff 800 Apr 22 07:15 ..
drwxr-xr-x@ 3 michalsalanci staff 96 Apr 21 14:38 .bedrock_agentcore
-rw-r--r--@ 1 michalsalanci staff 2042 Apr 21 20:36 .bedrock_agentcore.yaml
...
Invoke the agents:
(.venv) agents % agentcore invoke '{"prompt": "Hello"}'
{"type": "status", "step": 1, "source": "supervisor", "message": "Analyzing question..."}
...
╭─────────────────────────────────────────────────────────────── lttm_supervisor_stream ───────────────────────────────────────────────────────────────╮
│ Session: 523058d8-b0aa-480c-8e75-1919721b32d0 │
│ ARN: arn:aws:bedrock-agentcore:us-west-2:~~~~~~~~~~~~:runtime/lttm_supervisor_stream-~~~~~~~~~~ │
│ Logs: aws logs tail /aws/bedrock-agentcore/runtimes/lttm_supervisor_stream-~~~~~~~~~~-DEFAULT --log-stream-name-prefix "2026/04/22/[runtime-logs" │
│ --follow │
│ aws logs tail /aws/bedrock-agentcore/runtimes/lttm_supervisor_stream-~~~~~~~~~~-DEFAULT --log-stream-name-prefix "2026/04/22/[runtime-logs" │
│ --since 1h │
│ GenAI Dashboard: https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#gen-ai-observability/agent-core │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
The Problem
Everything works, so now let's do some damage:
Delete .bedrock_agentcore/ and redeploy the agents.
(.venv) agents % rm -rf .bedrock_agentcore
(.venv) agents %
(.venv) agents % agentcore deploy --auto-update-on-conflict
🚀 Launching Bedrock AgentCore (cloud mode - RECOMMENDED)...
...
❌ Launch failed: Read timeout on endpoint URL:
"https://bedrock-agentcore-codebuild-sources-~~~~~~~~~~~~-us-west-2.s3.us-west-2.amazonaws.com/lttm_supervisor_stream/deployment.zip?uploadId=P
LV.jlOIQ7YSYDOpjQpXuaNgjLvelC8RHRTupuEqZS.5E2RO90m8Gu4HcKXjav9BnSNmbgi_Div_x9RX5KKLuPKHGe9Yv1W8Wd_cvheisOhKQKRIlQgxYJJbPbAgqou_&partNumber=1"
...and it fails.
So let's clear the uv cache, maybe that helps, and try again.
(.venv) agents % uv cache clean --force
Clearing cache at: /Users/michalsalanci/.cache/uv
Removed 612792 files (8.8GiB)
(.venv) agents % agentcore launch --auto-update-on-conflict
🚀 Launching Bedrock AgentCore (cloud mode - RECOMMENDED)...
...
✅ Deployment completed successfully - Agent: arn:aws:bedrock-agentcore:us-west-2:~~~~~~~~~~~~:runtime/lttm_supervisor_stream-~~~~~~~~~~
╭───────────────────────── Deployment Success ─────────────────────────╮
...
And it works!
Goodbye, depression!
Welcome, victory!
Just for the full picture, let's invoke it:
(.venv) agents % agentcore invoke '{"prompt": "Hello"}'
...
Invocation failed: An error occurred (RuntimeClientError) when calling
the InvokeAgentRuntime operation: Runtime initialization time exceeded.
Please make sure that initialization completes in 30s.
(.venv) agents %
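By the way, when an invocation fails like this, the runtime logs are the first place to look, and the exact command is printed in every launch/invoke summary (redacted here like the rest of the output):
(.venv) agents % aws logs tail /aws/bedrock-agentcore/runtimes/lttm_supervisor_stream-~~~~~~~~~~-DEFAULT --log-stream-name-prefix "2026/04/22/[runtime-logs" --since 1h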
And here we go... the endless vicious circle of clearing the uv cache and redeploying begins. Until you realize the problem is elsewhere.
Not sure what is worse: the fact that it failed, or that I had 8.8 GiB of uv garbage sitting out there.
Goodbye victory!
Welcome back depression...
The Solutions
SOL1: Start from scratch
meaning: destroy the agent, delete .bedrock_agentcore/ and .bedrock_agentcore.yaml, configure with agentcore configure, and deploy with agentcore deploy.
On top of that, a couple of uv cache clears, because of course you forgot.
Sooner or later it works.
This solution feels to me like "go and be born again."
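For completeness, SOL1 boils down to something like this. A sketch only: the step that destroys the existing runtime is omitted because the subcommand depends on your toolkit version, and the entrypoint name supervisor.py is a made-up example:
(.venv) agents % rm -rf .bedrock_agentcore .bedrock_agentcore.yaml
(.venv) agents % agentcore configure --entrypoint supervisor.py
(.venv) agents % agentcore launch --auto-update-on-conflict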
SOL2: Stop shebangs going crazy
As weird as it sounds, the reason the invocation fails is the shebangs inside .bedrock_agentcore/<agentcore_runtime_name>/dependencies.zip.
I found a workaround on the internet, saying this:
- unzip dependencies.zip
- get into the /bin directory
- change the shebang in every file from whatever it is to #!/usr/bin/env python3
- re-zip
- re-deploy
Changing the shebangs did work for me, but only after I changed them a different way.
The proposed shebang, #!/usr/bin/env python3, did not work for me.
Let's see what my shebangs look like and what actually worked for me.
Get into .bedrock_agentcore/<agentcore_runtime_name>/,
Create a temp directory and unzip dependencies.zip into it,
List the actual shebangs.
(.venv) agents % cd .bedrock_agentcore/lttm_supervisor_stream/
mkdir deps_fix
cd deps_fix
unzip ../dependencies.zip
cd bin
...
bin % ls -la
total 88
drwxr-xr-x@ 13 michalsalanci staff 416 Apr 22 13:40 .
drwxr-xr-x@ 106 michalsalanci staff 3392 Apr 22 13:40 ..
-rwxr-xr-x@ 1 michalsalanci staff 459 Apr 22 12:27 bedrock-agentcore
-rwxr-xr-x@ 1 michalsalanci staff 451 Apr 22 12:27 dotenv
-rwxr-xr-x@ 1 michalsalanci staff 443 Apr 22 12:27 httpx
-rwxr-xr-x@ 1 michalsalanci staff 1851 Apr 22 12:27 jp.py
-rwxr-xr-x@ 1 michalsalanci staff 452 Apr 22 12:27 jsonschema
-rwxr-xr-x@ 1 michalsalanci staff 443 Apr 22 12:27 mcp
-rwxr-xr-x@ 1 michalsalanci staff 475 Apr 22 12:27 opentelemetry-bootstrap
-rwxr-xr-x@ 1 michalsalanci staff 486 Apr 22 12:27 opentelemetry-instrument
-rwxr-xr-x@ 1 michalsalanci staff 450 Apr 22 12:27 uvicorn
-rwxr-xr-x@ 1 michalsalanci staff 456 Apr 22 12:27 watchmedo
-rwxr-xr-x@ 1 michalsalanci staff 452 Apr 22 12:27 websockets
(.venv) bin %
Pick opentelemetry-instrument as an example and look inside:
(.venv) bin % head -3 opentelemetry-instrument
#!/bin/sh
'''exec' '/all/the/way/to/the/root_dir/.venv/bin/python3' "$0" "$@"
' '''
So there it is, this is the bad shebang we have to change (the '''exec' construction is a sh/Python polyglot: sh runs the exec line to hand the script over to Python, while Python sees those lines as a harmless string literal):
#!/bin/sh
'''exec' '/all/the/way/to/the/root_dir/.venv/bin/python3' "$0" "$@"
' '''
The shebang that actually works for me is this:
#!/bin/sh
'''exec' 'python3' "$0" "$@"
' '''
My guess at why: the original line points at my local macOS virtualenv, a path that cannot exist inside the AgentCore runtime, while a bare python3 resolves against whatever PATH the runtime provides. With the script below, the shebang is changed in every single file inside the /bin directory:
(.venv) deps_fix % for f in bin/*; do
  # only touch files whose launcher embeds a local /Users/... path
  if grep -q '/Users/' "$f" 2>/dev/null; then
    # BSD/macOS sed in-place: rewrite '/Users/...python3' to plain 'python3'
    sed -i '' "s|'/Users/[^']*python3'|'python3'|" "$f"
    echo "Fixed: $f"
  fi
done
Fixed: bin/bedrock-agentcore
Fixed: bin/dotenv
Fixed: bin/httpx
Fixed: bin/jp.py
Fixed: bin/jsonschema
Fixed: bin/mcp
Fixed: bin/opentelemetry-bootstrap
Fixed: bin/opentelemetry-instrument
Fixed: bin/uvicorn
Fixed: bin/watchmedo
Fixed: bin/websockets
Pick one file just to verify:
(.venv) deps_fix % head -3 bin/opentelemetry-instrument
#!/bin/sh
'''exec' 'python3' "$0" "$@"
' '''
(.venv) deps_fix %
Next, re-zip everything back in place and delete the temp directory:
(.venv) deps_fix % cd ..
rm dependencies.zip
cd deps_fix
zip -r ../dependencies.zip .
cd ..
rm -rf deps_fix
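Before redeploying, it doesn't hurt to check that the fix survived the round-trip into the new archive:
(.venv) lttm_supervisor_stream % unzip -p dependencies.zip bin/opentelemetry-instrument | head -3
#!/bin/sh
'''exec' 'python3' "$0" "$@"
' '''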
The moment of truth: redeploy and invoke
(.venv) agents % agentcore launch --auto-update-on-conflict
🚀 Launching Bedrock AgentCore (cloud mode - RECOMMENDED)
...
✅ Deployment completed successfully - Agent: arn:aws:bedrock-agentcore:us-west-2:~~~~~~~~~~~~:runtime/lttm_supervisor_stream-~~~~~~~~~~
╭──────────────────────────────────────────────────────────── Deployment Success ─────────────────────────────────────────────────────────────╮
...
(.venv) agents % agentcore invoke '{"prompt": "Hello"}'
...
╭────────────────────────────────────────────────────────── lttm_supervisor_stream ───────────────────────────────────────────────────────────╮
│ Session: d394f40f-2fc6-4c8f-9d71-43d3926612d6 │
│ ARN: arn:aws:bedrock-agentcore:us-west-2:~~~~~~~~~~~~:runtime/lttm_supervisor_stream-~~~~~~~~~~ │
│ Logs: aws logs tail /aws/bedrock-agentcore/runtimes/lttm_supervisor_stream-~~~~~~~~~~-DEFAULT --log-stream-name-prefix │
│ "2026/04/22/[runtime-logs" --follow │
│ aws logs tail /aws/bedrock-agentcore/runtimes/lttm_supervisor_stream-~~~~~~~~~~-DEFAULT --log-stream-name-prefix │
│ "2026/04/22/[runtime-logs" --since 1h │
│ GenAI Dashboard: https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#gen-ai-observability/agent-core │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
(.venv) agents %
Voilà! Agents are successfully invoked!
The Takeaway
I was thinking for quite some time about how to interpret this, and I think I got it:
If you have fat fingers like me (from lifting barbells!), just pay more attention!
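And a cheap insurance policy against the next slip, assuming you remember to refresh the copy after every successful deploy:
(.venv) agents % cp -R .bedrock_agentcore .bedrock_agentcore.bak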
Observation from May 2026: I also hit this issue every time I modified the dependencies in agents/requirements.txt, BUT only when my uv cache WAS NOT freshly pruned.
I guess that's bad news for the slim-fingered, and no change for the fat-fingered. I still have absolutely no idea how to interpret this.
AWS?
What's next
This article covered the major bug I experienced when observability was enabled.
For the rest of the series, see the list of parts at the top of this article.
