🗓️ This Week
Finally finished the Cyber Security 101 learning path and discovered the AI Security Learning Path on TryHackMe
Completed ...
For further actions, you may consider blocking this person and/or reporting abuse
Great work so far! I was wondering about your goals for this year since it looks like you are doing a bit of Web Dev, Security, and IOS development?
Continue posting and good work Umitomo :D
Thank you so much! I really appreciate you following my posts 😊
That's a great point — I’ll start including my yearly goals from my next post.
For this year, my goals are:
Thanks for the suggestion! I’ll keep improving step by step.
TryHackMe's AI Security path is underrated. working through prompt injection stuff changed how I look at every LLM integration - you start noticing the attack surface everywhere.
Thanks for your comment — I really appreciate it!
Totally agree. Working through the prompt injection modules really changed how I think about LLM integrations too. I’ve started noticing potential attack surfaces everywhere.
once that lens clicks it's hard to look at any LLM call the same way. treating every external data source the model reads as a potential injection vector - slows you down at first but you ship more defensively
Thanks for your comment!
I completely agree. Since I started working through the AI Security path on TryHackMe, I’ve really felt my perspective on AI change compared to before. I’m starting to see things from multiple angles now.
I’ll definitely keep going little by little.
tryhackme's AI security path is solid for this - the injection labs build a mental model you can't unlearn. you'll start seeing it in places that aren't obvious: tool outputs, retrieval results, even 'safe' structured data your agent pulls in. the multi-angle thing compounds faster than you'd expect.
Thanks for your comment! It’s really interesting to hear real-world feedback about the AI Security path from someone working in the field.
I’ve been interested in building applications with AI integrations for a while, so I’m hoping to keep learning in a way that I can apply to real development as well.
The point about model cards being important but often incomplete stuck with me. It's one of those things that sounds like a documentation problem on the surface, but I think it points to something deeper about how we're building the AI supply chain.
When you learned that most models rely on Common Crawl and that training decisions can introduce security risks, it connects back to the same issue—there's this long chain of dependencies where each link assumes the previous one did its due diligence. The base model inherits risks from the training data, the fine-tuned model inherits risks from the base model, and the application inherits risks from all of it. Model cards were supposed to make that chain traceable, but they're only as good as the weakest audit in the stack.
That parallel between your debugging practice (using
poand stepping through breakpoints) and what you're learning about AI security is interesting, even if unintentional. You're learning to trace execution state in one context while discovering how hard it is to trace provenance in another. One has mature tooling, the other barely has conventions.Are you finding that the AI security material is changing how you think about the apps you're building in SwiftUI, or do those still feel like separate learning tracks for now?
Thank you so much for your thoughtful comment — I really appreciate it!
That point about tracing execution state vs. tracing data provenance really stood out to me as well. I hadn’t thought about it that way at all, but it makes a lot of sense.
I actually started learning AI security because I’m exploring how to use AI in my work, and TryHackMe released content on it at the perfect time. As I’ve been learning, I feel like my understanding of AI — especially its risks — has become much clearer.
For SwiftUI, I’m trying not to overreach and instead focus on building things step by step within my current understanding. Because of what I’ve learned about AI security, I think I’ve become a bit more cautious about integrating AI features into applications.
For now, they still feel like separate learning tracks, but I feel like they might connect more over time.
Thanks again for sharing your perspective — it gave me a lot to think about!