Responsible AI attribution frameworks & professional AI training
15 years consulting → 9 years Red Hat → 4 years Boston University. Training the next generation of professionals to use AI responsibly and effectively.
Every idea connects to another: exploring how 25+ years of platform innovation creates unexpected bridges between domains.
Recent insights on AI attribution, professional training, and responsible integration.
The single-assistant fantasy breaks down as soon as AI touches real work. Different tasks have different trust boundaries, which means privacy has to be expressed in the architecture, not buried in settings.
The useful shift in agentic work is not one smarter agent. It is role separation: one layer scopes and governs the work, another executes against a contract, and a reviewer decides whether the result stands.
Three different eras, three different reasons, the same solution: leave a call open all day. The pattern keeps getting reinvented because the need it solves was never optional.
In 2015 I designed a governance model for a consulting collective with dynamic membership, shared ownership, and a rotating elected president. The consulting collective trend arrived a decade later. The structural problems I was trying to solve are the same ones people are running into now.
Most AI acknowledgments are too vague to be useful. Process transparency gives teams a practical, auditable way to describe human-AI work without pretending the model is an author.
My journey from platform consulting to Red Hat engineering to Boston University initiatives has shaped a distinct perspective on responsible AI integration. I focus on building attribution frameworks that make human-AI collaboration transparent and effective.
Learn More About My Work
Speaking engagements, initiative partnerships, professional AI training, and consulting on responsible GenAI integration.
Start a Conversation