Every transaction (stuff that gets done) becomes meaningful when seen through the right dimensions (perspectives, because marketing and sales always agree with accounts!). That’s why I use star schema thinking on every assignment—clarity, context, and perspectives built in.

Aspect | Kimball's Motivation (I like Kimball)
User-Centric Design | Edgar (Ted) F. Codd invented the relational model and formalised both relational algebra and relational calculus, laying the foundation for modern data systems. While the term OLAP (Online Analytical Processing) was popularised later, Codd's influence reached deep into analytical processing. Ralph Kimball, together with Margy Ross and the Kimball Group, focused on making relational databases more effective for analytical (OLAP-style) workloads. They championed dimensional modelling, especially the star schema, which intentionally trades strict normalisation for performance, usability, and alignment with how business users naturally think. My conclusion: their bottom-up methodology brought analytics closer to real-world decision-making. In short, Kimball, Ross, and their team translated complex data into something usable, scalable, and intuitive, and did so brilliantly. So why not build your data understanding on a foundation of star schema thinking?
Simplicity and Ease of Use (increased autonomy) | The star schema gives a clear, intuitive structure that mirrors user understanding, enabling self-service analysis with less reliance on technical intermediaries.
Query Performance | Optimized for performance, star schemas respond quickly to users' queries, supporting real-time insights and fluid exploration of business questions.
Data Consistency | Conformed dimensions ensure users see consistent definitions and metrics across reports, building trust and reducing confusion.
Flexibility | A single star schema can support diverse reporting needs and dashboards, adapting to users' evolving analytical questions.
Scalability | Business processes can be added incrementally, allowing data warehouse growth to follow user demand and organizational priorities.
Separation of Concerns | Keeping facts and dimensions distinct reflects how users separate events from descriptive context, aiding cognitive clarity.
Historical Tracking | Slowly Changing Dimensions (SCDs) model how context changes over time, crucial for trend, audit and ownership analysis.
Agility in Development | Star schemas are easy to sketch, build and refine, keeping pace with users' shifting needs in agile business environments. Excel supports star schemas, which is useful for early proof-of-concept work (a minimal SQL sketch follows this table).
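
To ground the table above, here is a minimal SQL sketch of a star schema, assuming a purchase requisition process. All table and column names (Fact_Purchase_Requisition, Dim_Employee, Dim_Date, and so on) are illustrative assumptions, not a prescribed design; Dim_Employee carries Type 2 SCD columns (valid_from, valid_to, is_current) to show the Historical Tracking row in practice.

```sql
-- Dimension: descriptive context, with Type 2 SCD columns for history
CREATE TABLE Dim_Employee (
    employee_key   INT PRIMARY KEY,   -- surrogate key, one per version
    employee_id    VARCHAR(20),       -- natural/business key
    employee_name  VARCHAR(100),
    company_code   VARCHAR(10),
    valid_from     DATE,              -- SCD Type 2: row effective date
    valid_to       DATE,              -- SCD Type 2: row expiry date (NULL if open)
    is_current     CHAR(1)            -- 'Y' for the active version
);

CREATE TABLE Dim_Date (
    date_key       INT PRIMARY KEY,   -- e.g. 20240131
    calendar_date  DATE,
    month_name     VARCHAR(10),
    year_number    INT
);

-- Fact: the measurable event, with a foreign key out to each dimension
CREATE TABLE Fact_Purchase_Requisition (
    requisition_key    INT PRIMARY KEY,
    employee_key       INT REFERENCES Dim_Employee (employee_key),
    date_key           INT REFERENCES Dim_Date (date_key),
    requisition_amount DECIMAL(12, 2) -- the measure users aggregate
);
```

The shape is deliberate: one wide fact table for the measurable event, narrow dimensions for context, and surrogate keys so history can be versioned without disturbing the facts.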
Aspect | Best Practice | Concern if Absent
Use of Standard Models | Star Schemas, Snowflake Schemas, 3NF, Data Vault, etc. are well-documented, proven, and widely adopted. | Invented models may lack scalability, interoperability, and clarity, making governance and analytics harder.
Transparency | Clear, auditable structures allow for validation, collaboration, and easy onboarding. | Custom diagrams may obscure assumptions, hide poor design choices, or create consultant dependency.
Alignment with Tools | BI tools (like Power BI, Tableau, Looker) are optimized for star schema-type models (see the query sketch after this table). | Non-standard models often require complex, inefficient queries and diminish performance.
Client Empowerment | Good models enable in-house teams to maintain, enhance, and use data confidently. | Vague or unique diagrams can make the client overly reliant on the consultancy.
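
As an illustration of the Alignment with Tools row: a typical dashboard visual compiles down to a single star join, an aggregate over the fact table grouped by dimension attributes. This sketch reuses the illustrative tables defined earlier.

```sql
-- Total requisition spend by company and month: one join per dimension,
-- grouped by whatever attributes the user drags onto the visual.
SELECT d.year_number,
       d.month_name,
       e.company_code,
       SUM(f.requisition_amount) AS total_spend
FROM Fact_Purchase_Requisition AS f
JOIN Dim_Employee AS e ON e.employee_key = f.employee_key
JOIN Dim_Date     AS d ON d.date_key     = f.date_key
GROUP BY d.year_number, d.month_name, e.company_code;
```

Because every question follows this same join-then-aggregate pattern, BI engines can optimize it aggressively; a bespoke model offers no such predictable shape.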

The star schema is effective across all stages of a project:

STAGE | ROLE OF STAR SCHEMA | BENEFITS
Requirements Elicitation | Helps identify key business entities (e.g., Employees, Companies, Requisitions) and measurable events (facts) | Clarifies scope early by focusing on facts and dimensions that align with user needs
Process Modelling | Maps business processes (e.g., purchase request approval) to fact-dimension relationships | Visualizes how data flows through systems; supports stakeholder understanding
Solution Design | Guides the logical and physical design of data tables using a central fact table with surrounding dimensions | Promotes consistency, modularity, and scalability of the solution
Solution Build | Implements the fact and dimension tables specified in the design, populating them through repeatable load processes | Keeps the build faithful to the design; modular tables can be built, loaded and verified independently
Integration Testing | Enables validation of data consistency and correctness through test cases across joined fact and dimension tables | Easier to test thanks to predictable structures and deterministic joins
User Acceptance Testing (UAT) | Users validate that facts and dimensions reflect the business logic and expected outputs | Increases confidence and traceability, as data is easy to verify by business users
Analytics | Supports fast, aggregated queries for trends, anomalies, and patterns using metrics from the fact table | Enables powerful, performant analytics over large datasets
Business Intelligence (BI) | Star schema is the foundation for semantic models in tools like Power BI, Tableau, or Looker | Provides intuitive structure for drag-and-drop exploration and dashboarding
Reporting | Powers tabular and visual reports (e.g., PR totals by department, employee, date) | Reports remain performant and readable due to dimensional clarity and optimized joins
Historic Data Selection (a function of owner and time) | A star schema is ideal for repeatable, transparent reporting, especially in self-service BI tools like Power BI; its fact-and-dimension structure supports reusable filters, easy slicing across dimensions, and systematic handling of changes over time (see the sketch after this table) | Procedural logic is better suited to one-off extractions or complex filtering that is hard to model; it offers precise control but lacks transparency and reusability, making it less efficient for evolving, repeatable reporting needs
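
For the Historic Data Selection row, a hedged sketch of selection as a function of owner and time, again using the illustrative tables from earlier. The fact row's surrogate key points at the employee version that was current when the requisition was raised, so owner attribution over any period is a reusable filter rather than a one-off script; the date range below is a placeholder.

```sql
-- Spend for the first half of 2023, attributed to the employee version
-- (owner) in effect when each requisition was raised; rerunning with a
-- different date range reproduces history transparently.
SELECT e.employee_name,
       e.company_code,
       SUM(f.requisition_amount) AS total_spend
FROM Fact_Purchase_Requisition AS f
JOIN Dim_Employee AS e ON e.employee_key = f.employee_key
JOIN Dim_Date     AS d ON d.date_key     = f.date_key
WHERE d.calendar_date BETWEEN DATE '2023-01-01' AND DATE '2023-06-30'
GROUP BY e.employee_name, e.company_code;
```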

Data centricity upstages procedural centricity

Aspect | Option 1: Star Schema Structure | Option 2: Procedural Logic
Definition | A data model where a central fact table (e.g., Purchase Requisition) is surrounded by dimension tables (e.g., Employee, Company Code, Cost Center). | A set of step-by-step procedural rules written in code (e.g., SQL stored procedures, Python logic) to determine entitlement dynamically.
Lookup or Legwork | A lookup is efficient even when more than one occurrence is possible. Fact table: Purchase Requisition; dimensions: Employee, Company, Cost_Center, Material, Plant. | An algorithm is inefficient when more than one occurrence is possible: IF employee.company_code = 112233 AND NOT EXISTS in 445566 THEN allow entitlement ELSE reject.
Reusability | High – the data model is reusable for different queries and reporting | Low – logic must be rewritten or refactored for other use cases
Performance | Optimized with joins and indexed tables; faster for large-scale analytics | Slower for large datasets; logic must scan and evaluate data for each transaction
Transparency | Easy to understand data relationships; business users can query directly | Logic is often hidden in code; requires technical knowledge
Auditability | Clear lineage and logs via tables | Harder to trace logic changes over time
Scalability | Scales better with large enterprise systems | Procedural logic becomes complex and error-prone as data volume grows
Entitlement Flexibility | Entitlement rules can be modelled through relationships in dimensions (e.g., linking only active employees in Newco); see the sketch after this table | Requires logic to handle every exception and state manually
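
Finally, a sketch of the Entitlement Flexibility row, reusing the company codes from the pseudocode above (112233 and 445566, both illustrative) and the Dim_Employee table sketched earlier. The procedural IF/THEN rule becomes a declarative set operation over the dimension, so changing the rule means changing data or a single view, not refactoring code.

```sql
-- Option 1, data-centric: entitled employees are those currently in
-- company 112233 with no record (any version) in company 445566.
SELECT e.employee_key,
       e.employee_name
FROM Dim_Employee AS e
WHERE e.company_code = '112233'
  AND e.is_current = 'Y'
  AND NOT EXISTS (
        SELECT 1
        FROM Dim_Employee AS x
        WHERE x.employee_id  = e.employee_id   -- same person, any version
          AND x.company_code = '445566'
      );
```

Exposed as a view, this gives the business a queryable, auditable entitlement list, consistent with the Transparency and Auditability rows above.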