4–8+ years of Python development experience.
Strong experience with data engineering, especially ETL/ELT pipelines.
Experience working with AWS, Azure, or GCP usage & billing datasets.
Hands-on experience with pandas, NumPy, PySpark, or similar data processing frameworks.
Familiarity with Kubernetes, cloud compute primitives, and distributed systems.
Experience building dashboards or integrating with BI tools (e.g., Grafana, Datadog, custom internal tools).
Strong understanding of cloud resource utilization metrics and cost drivers.
Ability to work in a fast-paced, execution-driven environment.