The following entries summarise STAR evidence and interview stories against each role requirement.

STAR Evidence & Interview Stories

Each entry gives the requirement, a STAR summary (Situation • Task • Action • Result), and the interview story in my voice.
Requirement: Proven experience leading end-to-end data migration projects in complex environments
S: Zurich Insurance migrating policy/claims to Guidewire across multiple regions and vendors.
T: Lead the migration end-to-end under tight timelines with zero operational impact.
A: Defined strategy and cutover plan; ran mock→dry→final cycles; owned source→target mappings, QA gates, and runbooks; chaired the RAID log (risks, assumptions, issues, dependencies).
R: On-time cutover with 100% load coverage on critical objects and a quiet hypercare.
Story: At Zurich I owned the whole migration chain into Guidewire. I set the strategy, locked the plan, and drove mock and dry runs until the numbers behaved. We baselined mappings, built sensible QA gates, and used a simple runbook so go-live was uneventful—the good kind. Critical objects hit 100% and hypercare was boring, which is my favourite metric. A sketch of that kind of load-coverage gate follows this entry.
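To make the QA gates and load-coverage checks concrete, here is a minimal Python sketch of a post-load gate of the kind described above; the object names, counts, and the 100% threshold are illustrative placeholders, not the actual Guidewire objects or tooling.

```python
# Minimal sketch of a post-load QA gate: compare source vs target row counts
# per critical object and fail the gate if coverage drops below the threshold.
# Object names and counts are illustrative, not the real migration objects.

CRITICAL_OBJECTS = ["policy", "claim", "party"]  # hypothetical critical objects

def coverage_gate(source_counts: dict, target_counts: dict, threshold: float = 1.0) -> bool:
    """Return True only if every critical object meets the coverage threshold."""
    ok = True
    for obj in CRITICAL_OBJECTS:
        src, tgt = source_counts.get(obj, 0), target_counts.get(obj, 0)
        coverage = tgt / src if src else 1.0
        status = "PASS" if coverage >= threshold else "FAIL"
        print(f"{obj:10s} source={src:>8} target={tgt:>8} coverage={coverage:.1%} {status}")
        ok = ok and coverage >= threshold
    return ok

if __name__ == "__main__":
    src = {"policy": 120_000, "claim": 45_000, "party": 310_000}
    tgt = {"policy": 120_000, "claim": 45_000, "party": 310_000}
    if not coverage_gate(src, tgt):
        raise SystemExit("Cutover gate failed: investigate before proceeding")
```

Running the same check after mock, dry, and final loads is what makes the final cutover numbers comparable with the rehearsals.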
Requirement: Strong knowledge of ETL/ELT tools, data modelling, and data governance
S: NFU Mutual DWH behind schedule; Nationwide needed a future-proof model; LBG needed traceability.
T: Stabilise pipelines, modernise ELT, and make models/governance stick.
A: Re-scoped to Matillion→Snowflake ELT; stood up semantic models and a modelling team; implemented lineage with CMDB/Collibra.
R: Faster time-to-data, clear stewardship, and auditors who stopped frowning.
Story: I’m tool-agnostic but opinionated. I moved NFU to Matillion→Snowflake to cut friction, set up a proper semantic model at Nationwide, and wired LBG’s lineage into CMDB/Collibra so audit didn’t involve guesswork. The point isn’t the badge on the box—it’s clean patterns that people can support on a Monday morning. A sketch of the load-then-transform ELT pattern follows this entry.
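As an illustration of the ELT pattern (land the raw data first, transform inside the warehouse), here is a minimal Python sketch. It assumes the snowflake-connector-python client, and the stage, schema, table, and credential values are hypothetical placeholders rather than the actual Matillion jobs.

```python
# Minimal ELT sketch: load raw files into Snowflake untouched, then transform
# inside the warehouse. All names and credentials below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder connection details
    user="elt_service",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# 1. Load: copy staged files into a raw table with no transformation applied.
cur.execute(
    "COPY INTO RAW.POLICY_RAW FROM @landing_stage/policy/ "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)

# 2. Transform: shape and deduplicate the data in-warehouse (the T after the L).
cur.execute("""
    CREATE OR REPLACE TABLE CURATED.DIM_POLICY AS
    SELECT policy_id, product_code, inception_date, status
    FROM RAW.POLICY_RAW
    QUALIFY ROW_NUMBER() OVER (PARTITION BY policy_id ORDER BY load_ts DESC) = 1
""")

cur.close()
conn.close()
```

The design point is that the transformation is a SQL statement the warehouse executes, so the pattern stays supportable whether the orchestration is Matillion, a scheduler, or a script like this.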
Requirement: Experience with data quality management and data validation techniques
S: Multiple sources with uneven quality feeding target platforms.
T: Prevent bad data from poisoning the cutover.
A: Put DQ rules at ingest, reconciliation reports after each load, sampling on high-risk fields, and failure playbooks; embedded first-time-match targets.
R: First-pass loads exceeded thresholds; defects trended down sprint-over-sprint; no show-stoppers at go-live.
Story: I treat DQ like brakes on a car: they exist so you can go fast safely. We set rule packs at ingest, reconciled every load, and targeted first-time match. When something failed, the playbook told the team what to do next—not what to panic about. By go-live the error rate was tame and we didn’t spend hypercare fixing basics. A sketch of an ingest-time rule pack follows this entry.
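To show what DQ rules at ingest with a failure playbook can look like, here is a minimal Python sketch; the field names, formats, and playbook actions are illustrative, not the real rule packs.

```python
# Minimal sketch of an ingest-time DQ rule pack: each rule carries its playbook
# action, so a failed record tells the team what to do next, not just that it failed.
# Field names, formats, and actions are illustrative placeholders.
import re

RULES = [
    ("policy_id present", lambda r: bool(r.get("policy_id")),
     "reject and re-request from source"),
    ("NI number format", lambda r: re.fullmatch(r"[A-Z]{2}\d{6}[A-D]", r.get("ni_number", "")) is not None,
     "quarantine for data steward review"),
    ("premium non-negative", lambda r: float(r.get("premium", 0)) >= 0,
     "quarantine and flag to Finance"),
]

def validate(record: dict) -> list[str]:
    """Return the playbook action for every rule the record fails."""
    return [f"{name}: {action}" for name, check, action in RULES if not check(record)]

if __name__ == "__main__":
    good = {"policy_id": "P123", "ni_number": "QQ123456C", "premium": "250.00"}
    bad = {"policy_id": "", "ni_number": "12345", "premium": "-10"}
    print(validate(good))  # [] -> counts towards first-time match
    print(validate(bad))   # each failure names its next step
```

Counting records that return an empty failure list against the total gives a simple first-time-match measure to track load over load.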
Requirement: Strong understanding of relational and non-relational databases
S: Mixed estate: Oracle/SQL Server/DB2 operational data, Snowflake analytics, Cassandra for event use-cases.
T: Choose the right store and design the hand-offs.
A: Normalised operational models where needed; star/snowflake in analytics; used Cassandra only for time-series/event workloads; designed CDC (GoldenGate/ODI) and batch/stream ELT patterns.
R: Right-sized platforms, predictable performance, simpler support.
Story: I’m fluent across relational and NoSQL, but I don’t treat every problem as a nail. OLTP stays tidy and normalised; analytics gets stars; events go to something that actually likes events. The joins happen in the right place, with CDC or ELT doing the heavy lifting. It’s boring architecture—on purpose. A sketch of the CDC merge semantics follows this entry.
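As a concrete view of the CDC hand-off, here is a minimal Python sketch of applying change events with merge (upsert/delete) semantics; the event shape and keys are illustrative, and a real pipeline would apply the changes in the warehouse rather than an in-memory dict.

```python
# Minimal sketch of applying CDC change events with merge semantics: replay
# inserts/updates/deletes in commit order so the target converges on the source.
# Event shape, keys, and values are illustrative placeholders.

def apply_changes(target: dict, events: list[dict]) -> dict:
    """Apply insert/update/delete events in commit order; last write wins."""
    for ev in sorted(events, key=lambda e: e["commit_ts"]):
        key = ev["key"]
        if ev["op"] in ("insert", "update"):
            target[key] = ev["row"]       # upsert: replace the current row image
        elif ev["op"] == "delete":
            target.pop(key, None)         # idempotent: missing keys are ignored
    return target

if __name__ == "__main__":
    state = {"C1": {"status": "OPEN"}}
    feed = [
        {"op": "update", "key": "C1", "row": {"status": "SETTLED"}, "commit_ts": 2},
        {"op": "insert", "key": "C2", "row": {"status": "OPEN"}, "commit_ts": 1},
        {"op": "delete", "key": "C3", "commit_ts": 3},
    ]
    print(apply_changes(state, feed))
    # {'C1': {'status': 'SETTLED'}, 'C2': {'status': 'OPEN'}}
```

A warehouse MERGE statement gives the same semantics at scale, which is what keeps the analytics star model simple to feed from the change stream.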
Requirement: Excellent stakeholder management and communication skills
S: S/4HANA divestment with 12 senior Finance stakeholders and multiple vendors.
T: Align scope, decisions, and pace.
A: Ran decision-focused workshops, published one-page visuals, and chaired weekly RAID with clear owners and dates.
R: Single, signed baseline; faster decisions; fewer escalations.
Story: On the divestment I had twelve senior Finance voices, all valid, not all aligned. I used simple visuals, a hard-nosed RAID, and made every meeting end in a decision. We signed a single baseline and never looked back. My rule: less theatre, more movement.
Requirement: Ability to manage cross-functional teams and multiple workstreams simultaneously
S: Multi-workstream delivery with on/offshore teams (15–20 people) and external partners.
T: Keep momentum without creating a PMO circus.
A: Daily flow checks, weekly integrated plan, dependency burn-down, and a visible scorecard for scope/risk/quality.
R: Improved cadence (+50%), reduced cycle time (−60%), and no missed stage gates.
Story: I like lightweight structure that people actually use. We ran a single integrated plan, surfaced dependencies early, and held everyone to a visible scorecard. Cadence went up, cycle time came down, and stage gates stopped being cliff edges.
Requirement: Solid problem-solving and decision-making skills with a strategic mindset
S: Azure DWH programme sliding right; stakeholders losing confidence.
T: Recover delivery and protect the long-term architecture.
A: Root-caused bottlenecks, cut low-value scope, re-sequenced for fastest value, and locked an architecture guardrail so fixes didn’t turn into debt.
R: Back on schedule, zero unplanned downtime, and a runway that scaled.
Story: When things wobble I get curious, not loud. At NFU I stripped out noise, re-sequenced for value, and set guardrails so we didn’t ‘win’ the week and lose the future. We landed the dates, avoided firefighting, and left something maintainable.