Better Decisions Mean Better Outcomes
We have lost count of how many capability maturity models have been foisted on the industry. Each of these yardstick approaches purports to measure performance according to its creator's view of software delivery practice. Ironically, the scores tie in neatly with the creator's own biased selection of which general or specific goals count, and only within the set of prescriptive tactics the model puts in scope.
This is why we now have so many to choose from: CMMI (Watts Humphrey), the Agile Maturity Model (Thoughtworks), and the DevOps Maturity Model (Gene Kim), alongside community-based articulations of capability such as OPM3 from the PMI, COBIT from ISACA, and ITSMMM from ITIL. The challenge with all of these fragmented scorecards is that they offer no in-the-moment reuse of the underlying empirical data sets (if those even exist publicly) to steer teams away from less effective choices. Instead, organizations must commit to an entire, one-size-fits-all framework, independent of context or culture.

Worse, the semantics of a rating of 3 or 4 are entirely arbitrary, because the scales are relative and only imply statistical averages across a meaningful population. It is arguable that these data sets are heavily influenced by commercial interest and community bias. That is to say, they suffer from a variant of the "peak-end rule", whereby the framers make the data fit the framework rather than the other way around. We believe this is a fundamental flaw in all existing capability improvement frameworks. At the heart of the self-organization process, a period in which software projects are especially susceptible to prescription, decisions are heavily influenced by the framework and therefore carry a high degree of bias. This makes any correlation highly suspect.
With SDE's Decision-centric Capability Improvement (DCCI), we seek to arrive at causality and to reuse proven experience by correlating empirical data with our teams' self-organization practice choices. Rather than imposing sets of practices on teams within highly biased practice-set boundaries, our approach enables choice across the entire spectrum of software development experience.
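As a purely illustrative sketch of the idea (not SDE's actual implementation), correlating empirical delivery data with recorded practice choices could look something like the following. All names here (Observation, cycle_time_days, outcome_by_practice) are hypothetical:

# Illustrative sketch only -- not SDE's implementation. All field and
# function names are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    practice: str           # self-organization choice the team recorded
    adopted: bool           # whether the team adopted the practice
    cycle_time_days: float  # empirical outcome measured afterwards

def outcome_by_practice(observations: list[Observation]) -> dict[str, float]:
    """Compare mean outcomes for teams that did vs. did not adopt a practice.

    A negative delta means adopters had shorter cycle times. This is
    correlation only; moving toward causality requires capturing the
    other decisions made in the same environment (see below).
    """
    deltas: dict[str, float] = {}
    for practice in {o.practice for o in observations}:
        adopted = [o.cycle_time_days for o in observations
                   if o.practice == practice and o.adopted]
        skipped = [o.cycle_time_days for o in observations
                   if o.practice == practice and not o.adopted]
        if adopted and skipped:
            deltas[practice] = mean(adopted) - mean(skipped)
    return deltas

if __name__ == "__main__":
    data = [
        Observation("trunk-based-dev", True, 3.1),
        Observation("trunk-based-dev", False, 5.4),
        Observation("pair-programming", True, 4.2),
        Observation("pair-programming", False, 4.0),
    ]
    print(outcome_by_practice(data))  # e.g. {'trunk-based-dev': -2.3, ...}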
Additionally, to let causality emerge, we capture the other types of decisions made within the same environment of uncertainty. These decisions also run the risk of bias, and capturing them provides a holistic view of the ecosystem. This allows objective observations about efficacy to be shared with the broader organization, enhancing credibility when it comes to providing forward-looking advice.
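To make the point concrete, a decision log that records non-practice decisions alongside their stated assumptions might look like the hypothetical sketch below; the DecisionRecord fields are illustrative assumptions, not SDE's schema:

# Hypothetical decision-log entry; field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    category: str         # e.g. "practice", "architecture", "staffing"
    description: str      # what was decided, in the team's own words
    assumptions: list[str]  # beliefs held at decision time (where bias hides)
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

log: list[DecisionRecord] = []
log.append(DecisionRecord(
    category="architecture",
    description="Adopt event sourcing for the ordering service",
    assumptions=["audit requirements will tighten",
                 "team has prior CQRS experience"],
))

Recording the assumptions explicitly is what allows them to be revisited later against the empirical outcomes, rather than leaving bias invisible.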
Can we help?