Data Scientist
Senior data scientist job in Washington, DC
hackajob has partnered with Planet Technologies, a leader in digital transformation and cloud innovation.
Contract: Full-time
Salary Range: $125,000 - $200,000 annually
Security Clearance: Top Secret (TS)
Planet Technologies is seeking a Data Scientist to support high-impact analytics and automation projects within secure federal environments. This role focuses on developing data-driven solutions, predictive models, and business intelligence tools to enhance decision-making and operational efficiency.
Job Description:
As a Data Scientist, you will design, develop, and implement data solutions to extract insights and support strategic initiatives. You'll collaborate across teams to build predictive models, automate processes, and improve data visualization and integration methods.
Key Responsibilities:
Build, maintain, and enhance dashboards, applications, and automations.
Connect to APIs and establish system-to-system integrations for ETL processes.
Develop predictive risk models and apply AI/ML techniques to analyze unstructured data.
Create and deliver comprehensive data calls, normalization, and visualizations.
Produce business intelligence analyses and visualizations for leadership.
Improve methodologies and tools used for risk assessment.
Research and recommend data storage and integration solutions.
Develop user documentation and guides for BI products.
Utilize data science tools such as SQL, R, and Python in daily operations.
Qualifications:
Strong proficiency with Power Platform, SharePoint, and InfoPath (required).
Solid understanding of machine learning techniques and algorithms.
Experience with data science toolkits such as R, Weka, NumPy, MATLAB, and BI tools like Power BI, Anaconda, or SPSS.
Experience working with or integrating FBI systems (preferred).
2-6+ years of experience in data science or related fields (certifications or equivalent experience may be considered).
Nice to Have:
Familiarity with FBI data systems and federal compliance frameworks.
Experience implementing machine learning for risk modeling or automation.
About hackajob:
hackajob is a recruitment platform that connects you with roles that match your skills and experience. To be considered for this opportunity, create a hackajob account and complete your profile.
This role requires you to be based in Washington D.C.
Senior Data Scientist
Senior data scientist job in McLean, VA
Purpose:
As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution.
Key Result Areas and Activities:
Design, build, and deploy AI/ML solutions using modern cloud and data platforms.
Lead data science projects across industries, ensuring alignment with business goals.
Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases.
Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark.
Communicate findings effectively through data visualizations and stakeholder presentations.
Drive adoption of MLOps practices for scalable and reliable model deployment.
Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing.
Roles & Responsibilities
Essential Skills
5+ years of experience in applied AI and machine/deep learning.
Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib.
Proficiency in Python and SQL for model development and data manipulation.
Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI.
Familiarity with cloud platforms: AWS, Azure, and GCP.
Solid foundation in advanced statistical methods and probabilistic analysis.
Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management.
Experience with visualization tools like Tableau, Power BI, or similar.
Familiarity with ML workflow orchestration and MLOps practices.
Desirable Skills
Experience with LLMs (Large Language Models) and agentic AI workflows.
Familiarity with modern data platforms like Snowflake.
Exposure to real-time data processing in cloud-native environments.
Contributions to open-source AI projects or publications in data science communities.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research-focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.
Data Scientist
Senior data scientist job in Columbia, MD
Data Scientist - Transit Data Focus
Columbia, MD (On-site/Hybrid)
Employment type: Contract
Duration: 6 Months
Justification: To manage and analyze customer databases, AVA (automated voice announcement), and schedule data for predictive maintenance and service planning.
Experience Level: 3-5 years
Job Responsibilities:
Collect, process, and analyze transit-related datasets including customer databases, AVA (automated voice announcement) logs, real-time vehicle data, and schedule data.
Develop predictive models and data-driven insights to support maintenance forecasting, service planning, and operational optimization.
Design and implement data pipelines to integrate, clean, and transform large, heterogeneous transit data sources.
Perform statistical analysis and machine learning to identify patterns, trends, and anomalies relevant to transit service performance and reliability.
Collaborate with transit planners, maintenance teams, and IT staff to translate data insights into actionable business strategies.
Monitor data quality and integrity; implement data validation and cleansing processes.
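The predictive-maintenance responsibilities above hinge on spotting unusual patterns in operational data. As a minimal, illustrative sketch (the data and threshold are hypothetical, not from this posting), a simple z-score check over daily fault counts is one way such anomalies can be surfaced:

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=2.0):
    """Flag days whose count deviates more than `threshold`
    standard deviations from the mean (a simple z-score test)."""
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    return [
        (day, count)
        for day, count in enumerate(daily_counts)
        if sigma > 0 and abs(count - mu) / sigma > threshold
    ]

# Hypothetical daily AVA fault counts over two weeks
counts = [3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 18, 4, 3, 2]
print(flag_anomalies(counts))  # the day-10 spike stands out
```

In practice a transit team would run this kind of test per route or per vehicle and feed the flagged days into maintenance forecasting models rather than eyeballing raw logs.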
Technical Skills & Qualifications:
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Transportation Engineering, or a related quantitative field.
3-5 years of experience working as a data scientist or data analyst, preferably in a transit, transportation, or public sector environment.
Strong proficiency in Python or R for data analysis, statistical modeling, and machine learning.
Experience with SQL for database querying, manipulation, and data extraction.
Familiarity with transit data standards such as GTFS, AVL/CAD, APC (Automated Passenger Counters), and AVA systems.
Experience with data visualization tools such as Power BI, or equivalent.
Data Scientist
Senior data scientist job in McLean, VA
Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support. The candidate MUST have experience designing and building data infrastructure, developing data pipelines, transforming and preparing data, ensuring data quality and security, and monitoring and optimizing systems. The candidate MUST have extensive experience with Python and AWS. Experience with SQL, multi-data source queries with database technologies (PostgreSQL, MySQL, RDS, etc.), NiFi, Git, Elasticsearch, Kibana, Jupyter Notebooks, NLP, AI, and any data visualization tools (Tableau, Kibana, Qlik, etc.) is desired.
Required Skills and Demonstrated Experience
Demonstrated experience with data engineering, to include designing and building data infrastructure, developing data pipelines, transforming/preparing data, ensuring data quality and security, and monitoring/optimizing systems.
Demonstrated experience with data management and integration, including designing and operating robust data layers for application development across local and cloud or web data sources.
Demonstrated work experience programming with Python
Demonstrated experience building scalable ETL and ELT workflows for reporting and analytics.
Demonstrated experience with general Linux computing and advanced bash scripting
Demonstrated experience with SQL.
Demonstrated experience constructing complex multi-data source queries with database technologies such as PostgreSQL, MySQL, Neo4J or RDS
Demonstrated experience processing data sources containing structured or unstructured data
Demonstrated experience developing data pipelines with NiFi to bring data into a central environment
Demonstrated experience delivering results to stakeholders through written documentation and oral briefings
Demonstrated experience using code repositories such as Git
Demonstrated experience using Elastic and Kibana
Demonstrated experience working with multiple stakeholders
Demonstrated experience documenting such artifacts as code, Python packages and methodologies
Demonstrated experience using Jupyter Notebooks
Demonstrated experience with machine learning techniques including natural language processing
Demonstrated experience explaining complex technical issues to more junior data scientists, in graphical, verbal, or written formats
Demonstrated experience developing tested, reusable and reproducible work
Work or educational background in one or more of the following areas: mathematics, statistics, hard sciences (e.g., Physics, Computational Biology, Astronomy, Neuroscience), computer science, data science, or business analytics
Desired Skills and Demonstrated Experience
Demonstrated experience with cloud services, such as AWS, as well as cloud data technologies and architecture.
Demonstrated experience using big data processing tools such as Apache Spark or Trino
Demonstrated experience with machine learning algorithms
Demonstrated experience with using container frameworks such as Docker or Kubernetes
Demonstrated experience with using data visualizations tools such as Tableau, Kibana or Apache Superset
Demonstrated experience creating learning objectives and creating teaching curriculum in technical or scientific fields
Location:
McLean, Virginia
This position is onsite and there is no remote availability.
Clearance:
TS/SCI with Full Scope Polygraph
Applicant MUST hold permanent U.S. citizenship for this position in accordance with government contract requirements.
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
Data Scientist TS/SCI with Poly Required
Senior data scientist job in McLean, VA
Join our Deloitte AI & Engineering team to transform technology platforms, drive innovation, and help make a significant impact on our clients' success. You'll work alongside talented professionals reimagining and reengineering operations and processes that are critical to businesses. Your contributions can help clients improve financial performance, accelerate new digital ventures, and fuel growth through innovation.
Work You'll Do
Deloitte is seeking a Data Scientist to join a high-performing analytics team supporting our client in Chantilly, VA. In this on-site role, you will leverage your expertise in Python, Jupyter notebooks, pandas, and NumPy to develop and maintain data pipelines, apply advanced machine learning techniques, and extract actionable insights from large, complex datasets. You will collaborate closely with mission operations teams and client stakeholders, presenting technical findings to audiences of varying technical backgrounds and responding to high-priority analytic requests. This position offers the opportunity to work on impactful, high-visibility projects at the forefront of national security, while fostering professional growth and innovation in data science and AI/ML solutions. Excellent communication skills and the ability to work effectively under pressure are essential for success in this dynamic environment.
Develop, implement, and maintain data pipelines to process and analyze large, complex datasets in support of the client mission.
Apply advanced data science techniques - including supervised and unsupervised machine learning, spatial analysis, and time-series modeling - to extract actionable insights for mission-critical operations.
Build, test, and optimize analytic methodologies and AI/ML solutions for integration into user-facing technical tools.
Collaborate directly with client stakeholders and mission operations teams to understand requirements, present findings, and tailor analytic solutions to evolving needs.
Document code, workflows, and analytic processes following best practices to ensure reproducibility, transparency, and knowledge sharing within the team.
Prepare and deliver clear, concise technical briefings and presentations to audiences ranging from technical peers to senior leadership and non-technical stakeholders.
Respond rapidly to high-priority analytic requests, adapting to shifting mission priorities while maintaining high standards of technical rigor and communication.
The Team
Deloitte's Government and Public Services (GPS) practice - our people, ideas, technology, and outcomes - is designed for impact. Serving federal, state, and local government clients as well as public higher education institutions, our team of professionals brings fresh perspective to help clients anticipate disruption, reimagine the possible, and fulfill their mission promise.
Our AI & Data offering provides a full spectrum of solutions for designing, developing, and operating cutting-edge Data and AI platforms, products, insights, and services. Our offerings help clients innovate, enhance and operate their data, AI, and analytics capabilities, ensuring they can mature and scale effectively.
Qualifications
Required:
Active TS/SCI Clearance w/ Polygraph
Bachelor's degree in Computer Science, Engineering, or a related field.
Must be legally authorized to work in the United States without the need for employer sponsorship, now or at any time in the future
Must be willing to work 100% onsite in Chantilly, VA.
4+ years of experience in Data Analytics, Data Science, and Data Engineering.
5+ years of experience in advanced Python coding and modeling.
2+ years of experience working with NumPy, Pandas, and Spark.
Preferred:
4+ years of experience building, testing, and optimizing AI/ML solutions.
Active Python certification.
Previous Federal Consulting and/or professional services experience.
Previous experience presenting ideas to clients and other decision makers across technical and non-technical audiences.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Applied Data Scientists
Senior data scientist job in Rockville, MD
1. Role Overview
Mercor is seeking applied data science professionals to support a strategic analytics initiative with a global enterprise. This contract-based opportunity focuses on extracting insights, building statistical models, and informing business decisions through advanced data science techniques. Freelancers will translate complex datasets into actionable outcomes using tools like Python, SQL, and visualization platforms. This short-term engagement emphasizes experimentation, modeling, and stakeholder communication - distinct from production ML engineering.
2. Key Responsibilities
● Translate business questions into data science problems and analytical workflows
● Conduct data wrangling, exploratory analysis, and hypothesis testing
● Develop statistical models and predictive tools for decision support
● Create compelling data visualizations and dashboards for business users
● Present findings and recommendations to non-technical stakeholders
3. Ideal Qualifications
● 5+ years of applied data science or analytics experience in business settings
● Proficiency in Python or R (pandas, NumPy, Jupyter) and strong SQL skills
● Experience with data visualization tools (e.g., Tableau, Power BI)
● Solid understanding of statistical modeling, experimentation, and A/B testing
● Strong communication skills for translating technical work into strategic insights
4. More About the Opportunity
● Remote
● Expected commitment: minimum 30 hours/week
● Project duration: ~6 weeks
5. Compensation & Contract Terms
● $75-100/hour
● Paid weekly via Stripe Connect
● You'll be classified as an independent contractor
6. Application Process
● Submit your resume, followed by a domain expertise interview and a short form
7. About Mercor
● Mercor is a talent marketplace that connects top experts with leading AI labs and research organizations
● Our investors include Benchmark, General Catalyst, Adam D'Angelo, Larry Summers, and Jack Dorsey
● Thousands of professionals across domains like law, creatives, engineering, and research have joined Mercor to work on frontier projects shaping the next era of AI
Data Scientist
Senior data scientist job in Reston, VA
• Collect, clean, and preprocess large datasets from multiple sources.
• Apply statistical analysis and machine learning techniques to solve business problems.
• Build predictive models and algorithms to optimize processes and improve outcomes.
• Develop dashboards and visualizations to communicate insights effectively.
• Collaborate with cross-functional teams (Product, Engineering, Risk, Marketing) to identify opportunities for leveraging data.
• Ensure data integrity, security, and compliance with organizational standards.
• Stay current with emerging technologies and best practices in data science and AI.
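The collect/clean/preprocess responsibility above usually comes down to de-duplicating records, dropping incomplete rows, and normalizing values before any modeling. A minimal plain-Python sketch (the field names and sample records are hypothetical; a real pipeline would likely use Pandas, as the qualifications below note):

```python
def clean_records(records):
    """De-duplicate, drop incomplete rows, and normalize field values -
    the preprocessing that precedes statistical modeling."""
    seen = set()
    cleaned = []
    for rec in records:
        # Drop rows missing required fields
        if rec.get("id") is None or rec.get("amount") is None:
            continue
        # Drop exact duplicates on the (id, amount) key
        key = (rec["id"], rec["amount"])
        if key in seen:
            continue
        seen.add(key)
        cleaned.append({
            "id": rec["id"],
            "amount": float(rec["amount"]),  # cast strings to numeric
            "category": (rec.get("category") or "unknown").strip().lower(),
        })
    return cleaned

raw = [
    {"id": 1, "amount": "9.99", "category": " Retail "},
    {"id": 1, "amount": "9.99", "category": "Retail"},  # duplicate
    {"id": 2, "amount": None},                          # incomplete
    {"id": 3, "amount": "25.00"},                       # missing category
]
print(clean_records(raw))  # two clean rows survive
```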
________________________________________
Required Qualifications
• Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or related field.
• Strong proficiency in Python, R, SQL, and experience with data manipulation libraries (e.g., Pandas, NumPy).
• Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
• Solid understanding of statistical modeling, hypothesis testing, and data visualization.
• Experience with big data platforms (e.g., Spark, Hadoop) and cloud environments (AWS, Azure, GCP).
• Excellent problem-solving skills and ability to communicate complex concepts clearly.
________________________________________
Preferred Qualifications
• Experience in risk modeling, financial services, or product analytics.
• Knowledge of MLOps and deploying models in production.
• Familiarity with data governance and compliance frameworks.
________________________________________
Soft Skills
• Strong analytical thinking and attention to detail.
• Ability to work independently and in a team environment.
• Effective communication and stakeholder management skills.
Data Engineer
Senior data scientist job in McLean, VA
Data Management - Big Data/Machine Learning Engineer - Lead
Duration: 2 years
Pay: $75-$85/hr, W2 only (no C2C)
Recommender is a key driver of Incremental PV for Campaign Arbitration on EASE creating over $400MM PV generated via Machine Learning optimization. For 2026 the platform will expand to support Email arbitration gating 100M emails/month.
The platform RTE is expected to increase by 75% through expanded batch data and model training infra.
The additional NAL will critically support RTE for the email expansion as well as add bench strength for campaign RTE. This will free up FTEs for Q1/Q2 KRs to produce the Replay framework and take ownership of the model training infrastructure.
Basic Qualifications:
Bachelor's Degree
At least 3 years of experience in application development (Internship experience does not apply)
At least 1 year of experience in big data technologies
Preferred Qualifications:
5+ years of experience in application development including Python, SQL, Scala, or Java
2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
2+ years of experience working on real-time data and streaming applications
2+ years of experience with NoSQL implementation (Mongo, Cassandra)
2+ years of data warehousing experience (Redshift or Snowflake)
3+ years of experience with UNIX/Linux including basic commands and shell scripting
2+ years of experience with Agile engineering practices
Senior Data Scientist, Sustainable Technology
Senior data scientist job in Washington, DC
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Senior Data Scientist, Sustainable Technology
Job Overview:
As the Senior Data Engineer / Scientist for Mastercard's Sustainable Technology, you will be the driving force behind the development and execution of cutting-edge data strategies and data environment frameworks. Your expertise will ensure the effective utilization of data, enabling the delivery of dependable Data & Analytics Services. You will collaborate with cross-functional teams and establish data-related best practices in alignment with Mastercard standards.
Responsibilities:
• Design, develop, and maintain new data capabilities and infrastructure for Mastercard's Sustainable Technology Internal Data Lake.
• Create new data pipelines, data transfers, and compliance-oriented infrastructure to facilitate seamless data utilization within on-premise/cloud environments.
• Identify existing data capability and infrastructure gaps or opportunities within and across initiatives and provide subject matter expertise in support of remediation.
• Collaborate with technical teams and business stakeholders to understand data requirements and translate them into technical solutions.
• Work with large datasets, ensuring data quality, accuracy, and performance.
• Implement data transformation, integration, and validation processes to support analytics/BI and reporting needs.
• Optimize and fine-tune data pipelines for improved speed, reliability, and efficiency.
• Implement best practices for data storage, retrieval, and archival to ensure data accessibility and security.
• Troubleshoot and resolve data-related issues, collaborating with the team to identify root causes.
• Document data processes, data lineage, and technical specifications for future reference.
• Participate in code reviews, ensuring adherence to coding standards and best practices.
• Collaborate with DevOps teams to automate deployment and monitoring of data pipelines.
• Additional tasks as required.
All About You
Education:
• Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Knowledge / Experience:
• Proven experience as a Data Engineer / Scientist or similar role.
• Deep understanding of data visualization, statistics, hypothesis testing, business intelligence tools, SQL, data cleaning, and data lifecycle management.
• Proficiency in designing and implementing data tools, technologies, and processes.
• Expertise in data engineering, ETL/ELT processes, data warehousing, and data modeling.
• Strong command of data integration techniques and data quality management.
• Hands-on experience with data technologies such as Hadoop, Spark, Python, SQL, Alteryx, NiFi, SSIS, etc.
• Familiarity with cloud platforms and services, such as AWS, GCP, or Azure.
• Excellent problem-solving skills and ability to provide innovative data solutions.
• Strong leadership skills with a proven track record of guiding and mentoring a team.
• 3+ years of experience in a related field.
• 3+ years of experience delivering secure solutions in the Financial Services Sector is preferred.
• Broad understanding of Software Engineering Concepts and Methodologies is required.
• Demonstrate MC Core Competencies.
Skills / Abilities:
• Must be high-energy, detail-oriented, proactive and have the ability to function under pressure in an independent environment.
• Experience building data pipelines through Spark with Scala/Python/Java on Hadoop or Object storage.
• Expertise in Data Engineering and Data Analysis: implementing multiple end-to-end DW projects in Big Data Hadoop environment.
• Experience working with databases like MS SQL Server, Oracle, and strong SQL knowledge.
• Experience in BI tools like Tableau, Power BI.
• Experience with Alteryx, SSIS, NiFi, Spark, Cloudera Machine Learning, S3 Protocol, Power BI, NoSQL data structures, Splunk, and Databricks is an added advantage.
• Experience automating data flow processes in a Big Data environment.
• Pull in data from various monitoring platforms to aggregate and enable data science work in support of ESG (i.e., sustainable impact).
• Must have a high degree of initiative and self-motivation to drive results.
• Possesses strong communication skills, both verbal and written, along with strong relationship, collaborative, and organizational skills.
• Willingness and ability to learn, take on challenging opportunities, and work as a member of a matrix-based, diverse, and geographically distributed project team.
Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach, and
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.
Pay Ranges
O'Fallon, Missouri: $115,000 - $184,000 USD
Data Engineer
Senior data scientist job in Vienna, VA
Seneca Resources is seeking a Data Engineer to help design, build, and optimize scalable data systems that power analytics, reporting, and business intelligence initiatives. In this role, you'll collaborate with business and technical stakeholders to translate data needs into actionable solutions - ensuring performance, quality, and security across cloud-based platforms.
This opportunity is ideal for an experienced data professional passionate about Azure Data Factory, SSIS, MongoDB, and Databricks, who thrives in a collaborative and forward-thinking environment. You'll play a key role in modernizing data pipelines, automating integrations, and improving data-driven decision-making across the enterprise.
Key Responsibilities:
Design, develop, and maintain end-to-end ETL and data pipelines using Azure Data Factory, SSIS, and other integration tools.
Ingest, transform, and load data from diverse sources including APIs, JSON, and NoSQL into structured SQL environments.
Implement and monitor data governance, lineage tracking, and quality standards to ensure consistency and reliability.
Partner with cross-functional teams to define data requirements and build efficient, scalable, and cost-effective solutions.
Optimize and modernize existing data workflows for performance and scalability in cloud-based environments.
Work with MongoDB, Azure SQL, and Databricks to build robust data models and support analytical workloads.
Apply best practices for metadata management, security, and automation using scripting languages like Python or PowerShell.
Collaborate with analytics and reporting teams to enable self-service and operational insights.
Required Skills & Education:
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
3-5 years of hands-on experience in data engineering, ETL development, or data integration.
Strong command of SQL, APIs, and JSON-based data transformations.
Proficiency with Azure Data Factory, SSIS, Databricks, MongoDB, and data warehousing solutions.
Familiarity with Python, PowerShell, R, or C# for automation and transformation tasks.
Solid understanding of data modeling, architecture, and cloud-based ETL design principles.
Excellent problem-solving skills, communication, and a collaborative mindset.
Proven ability to thrive in a hybrid, cross-functional team environment.
Preferred Qualifications:
Experience implementing data governance frameworks and metadata management solutions.
Exposure to modern cloud data platforms such as Azure Synapse, AWS Redshift, or Snowflake.
Understanding of CI/CD pipelines and DevOps practices in data engineering.
About Seneca Resources
At Seneca Resources, we're more than a staffing and consulting firm - we're a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we connect skilled professionals with meaningful opportunities that advance careers and drive innovation.
When you join Seneca, you gain access to competitive pay, comprehensive health benefits, 401(k) plans, and a dedicated team that supports your professional growth every step of the way.
Seneca Resources is proud to be an Equal Opportunity Employer, committed to fostering a diverse and inclusive workplace where all qualified candidates are encouraged to apply.
Data Engineer / Big Data Engineer
Data engineer job in McLean, VA
Immediate need for a talented Data Engineer / Big Data Engineer. This is a 12-month contract opportunity with long-term potential, located in McLean, VA (hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93504
Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning.
Develop backend and automation tools using Golang and/or Python as needed.
Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
Perform root-cause analysis and implement automation to prevent recurring issues.
Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
Ensure compliance with enterprise governance, data quality, and cloud security standards.
Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.
Key Requirements and Technology Experience:
Proficiency in Python with experience building scalable data pipelines or ETL processes.
Strong hands-on experience with Spark/PySpark for distributed data processing.
Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
Experience with Golang for scripting, backend services, or performance-critical processes.
Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
Familiarity with CI/CD workflows, Git, and automated testing.
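The Snowflake window-function work listed above can be sketched using Python's built-in sqlite3 as a stand-in engine; the table, columns, and data here are hypothetical, and the `ROW_NUMBER()` pattern shown (latest record per key) is essentially the same in Snowflake SQL:

```python
import sqlite3

# In-memory SQLite stands in for Snowflake; schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (account_id TEXT, txn_ts TEXT, amount REAL);
INSERT INTO transactions VALUES
  ('A1', '2024-01-01', 100.0),
  ('A1', '2024-01-03', 250.0),
  ('B2', '2024-01-02', 75.0);
""")

# ROW_NUMBER() partitioned by account keeps only the most recent transaction
# per account -- a common dedup/latest-record windowing pattern.
rows = conn.execute("""
SELECT account_id, txn_ts, amount FROM (
  SELECT *, ROW_NUMBER() OVER (
    PARTITION BY account_id ORDER BY txn_ts DESC
  ) AS rn
  FROM transactions
)
WHERE rn = 1
ORDER BY account_id
""").fetchall()
print(rows)
```

Performance tuning in Snowflake then becomes a matter of clustering keys and pruning, but the query shape carries over directly.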
Our client is a leader in the banking and financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Sr Data Analytics Engineer
Senior data analytics engineer job in Washington, DC
Hybrid Work Schedule: one day per week in DC + one day per month in Arlington
Salary range: $125,000 - $145,000
The Senior Data Analytics Engineer is a key contributor in modernizing how data is accessed, processed, and leveraged throughout the enterprise. This role is responsible for breaking down data silos, streamlining workflows, and ensuring timely, reliable data availability to address critical business needs. Acting as a Data Engineer, you will lead initiatives to enhance data pipelines, strengthen infrastructure, and improve data quality, scalability, and resilience.
Operating within an Agile framework, this position collaborates closely with cross-functional teams to design and deliver innovative, data-driven solutions. The ideal candidate will possess deep technical expertise in Databricks, ETL development, and automated data pipelines, with proven experience building and managing large-scale, high-performance data systems.
Key Responsibilities
- Examine complex datasets to identify trends, patterns, risks, and opportunities that inform strategic decisions.
- Build intuitive dashboards and visualizations using Databricks and tools such as Tableau, Power BI, or Excel.
- Produce detailed analytical reports that translate insights into actionable recommendations.
- Perform data cleansing, transformation, and structuring to ensure accuracy and usability.
- Write and optimize SQL queries for data validation and manipulation across multiple systems.
- Apply foundational machine learning or predictive analytics techniques to support forward-looking insights.
- Partner with stakeholders to gather requirements, refine deliverables, and align outputs with business objectives.
- Participate in Agile ceremonies, such as backlog grooming, sprint planning, and testing, to ensure timely delivery of analytics solutions.
- Contribute to cross-functional teams by providing analytical expertise and supporting SDLC activities, including user story documentation and functional testing.
- Develop domain knowledge in assigned business areas to strengthen analytical interpretations and recommendations.
Requirements
- Ability to pass a thorough background investigation
- Bachelor's degree or equivalent experience (additional four years in lieu of degree)
- Minimum 5 years of relevant professional experience
- Proficiency in Oracle PL/SQL, SQL, Oracle OEM, and CloudWatch
- Strong background in data modeling, integration, and engineering
- Hands-on experience with Databricks platform
- Familiarity with data governance tools (e.g., Alation)
- Exposure to predictive analytics or machine learning frameworks
- Scripting skills in Python or shell
- Exceptional verbal and written communication skills
- Strong analytical and problem-solving abilities
Data Engineer - with Strong Java Exp.
Data engineer job in McLean, VA
W2 only
Former Capital One experience required.
Local candidates only.
Role: Data Engineer with Java background
Duration: 12+ months
We are seeking an experienced Data Engineer with a strong Java background to support our data platforms and ETL operations. The ideal candidate will have hands-on experience building and maintaining ETL pipelines, writing complex SQL queries, and applying software engineering best practices. This contract role offers a potential path to full-time conversion based on performance.
Job Description:
Strong experience developing ETL jobs and data pipelines
Experience with data warehousing (Snowflake, Redshift, BigQuery)
Exposure to big data tools (Spark, Hadoop, Kafka)
Ability to write and optimize complex SQL queries
Solid software development background in Java (Python is a plus)
Experience working with relational databases and large datasets
Familiarity with cloud environments (AWS/Azure/GCP preferred)
Data Engineer (GCP)
Data engineer job in Columbia, MD
Data Engineer
Hybrid (4 days in office)
JOB FUNCTIONS
Responsibilities
Work closely with business, IT, analyst, and data science groups to collect business requirements.
Design, develop, deploy and support high performance data pipelines both inbound and outbound.
Model data platform by applying the business logic and building objects in the semantic layer of the data platform.
Optimize data pipelines for performance, scalability, and reliability.
Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
Ensure the quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root.
Document the design and support strategy of the data pipelines.
Capture, store, and socialize data lineage and operational metadata.
Troubleshoot and resolve data engineering issues as they arise.
Develop REST APIs to expose data to other teams within the company.
Mentor and guide junior data engineers.
Required for All Jobs
Performs other duties as assigned
Complies with all policies and standards
QUALIFICATIONS
Education
Required - Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field
Nice to Have - Master's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field
Work Experience
Minimum 6 years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics
2 years of experience in Google Cloud with services such as BigQuery, Composer, GCS, Datastream, and Dataflow
Knowledge, Skills and Abilities
Expert knowledge of SQL and Python programming
Experience working with Airflow as a workflow management tool and building operators to connect, extract, and ingest data as needed.
Experience in tuning queries for performance and scalability
Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar.
Excellent organizational, prioritization and analytical abilities
Proven experience delivering work incrementally through successful launches.
Excellent problem-solving and critical-thinking skills to recognize and comprehend complex data issues affecting the business environment.
Experience working in an Agile environment.
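One pattern underlying the Airflow operator and real-time ingestion requirements above is watermark-based incremental extraction: each run pulls only the rows newer than the last high-water mark. A minimal, dependency-free sketch (function and field names are hypothetical; ISO-formatted timestamps compare correctly as strings):

```python
def extract_incremental(source_rows, watermark):
    """Return rows newer than the watermark plus the advanced watermark --
    the core logic of an incremental-ingestion task in a scheduler like Airflow."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    new_wm = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_wm

# Simulated source table (hypothetical data).
source = [
    {"id": 1, "updated_at": "2024-05-01"},
    {"id": 2, "updated_at": "2024-05-03"},
    {"id": 3, "updated_at": "2024-05-02"},
]

rows, wm = extract_incremental(source, "2024-05-01")
print(len(rows), wm)  # 2 new rows; watermark advances to 2024-05-03
```

In production the watermark would be persisted (e.g., in an Airflow Variable or a metadata table) between runs rather than passed in as a literal.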
Data Scientist
Senior data scientist job in Gaithersburg, MD
# Job Description: AI Task Evaluation & Statistical Analysis Specialist
## Role Overview
We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.).
## Key Responsibilities
- **Statistical Failure Analysis**: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags)
- **Root Cause Analysis**: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations
- **Dimension Analysis**: Analyze performance variations across finance sub-domains, file types, and task categories
- **Reporting & Visualization**: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities
- **Quality Framework**: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings
- **Stakeholder Communication**: Present insights to data labeling experts and technical teams
## Required Qualifications
- **Statistical Expertise**: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition
- **Programming**: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis
- **Data Analysis**: Experience with exploratory data analysis and creating actionable insights from complex datasets
- **AI/ML Familiarity**: Understanding of LLM evaluation methods and quality metrics
- **Tools**: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL
## Preferred Qualifications
- Experience with AI/ML model evaluation or quality assurance
- Background in finance or willingness to learn finance domain concepts
- Experience with multi-dimensional failure analysis
- Familiarity with benchmark datasets and evaluation frameworks
- 2-4 years of relevant experience
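A stripped-down version of the per-dimension failure clustering this role describes: group evaluation records by one dimension at a time and compare failure rates. The records, field names, and dimensions below are invented; a real analysis would use pandas/scipy as listed in the qualifications:

```python
from collections import defaultdict

# Toy evaluation records (hypothetical fields mirroring the posting's dimensions).
records = [
    {"task_type": "reconciliation", "file_type": "xlsx", "passed": False},
    {"task_type": "reconciliation", "file_type": "csv",  "passed": True},
    {"task_type": "valuation",      "file_type": "pdf",  "passed": False},
    {"task_type": "valuation",      "file_type": "pdf",  "passed": False},
    {"task_type": "valuation",      "file_type": "xlsx", "passed": True},
]

def failure_rate_by(records, dim):
    """Failure rate per value of one dimension -- the simplest form of
    per-dimension failure clustering."""
    fails, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[dim]] += 1
        fails[r[dim]] += not r["passed"]  # bool counts as 0/1
    return {k: fails[k] / totals[k] for k in totals}

print(failure_rate_by(records, "task_type"))
print(failure_rate_by(records, "file_type"))
```

Rates that spike for one dimension value (here, PDF inputs failing every time) are exactly the failure clusters the role would then trace to task design, rubric clarity, file complexity, or agent limitations.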
Sr Data Engineer
Senior data engineer job in Washington, DC
The Senior Data Engineer will be instrumental in reshaping how data is collected, processed, and leveraged throughout the organization. This role centers on breaking down data silos, automating processes, and ensuring reliable data delivery to support critical business decisions. The engineer will lead initiatives to enhance data pipelines, strengthen infrastructure, and improve overall data quality, scalability, and performance.
Operating in an Agile environment, this position requires close collaboration with cross-functional teams to design and implement innovative, data-driven solutions. The ideal candidate will possess strong technical expertise in Databricks, ETL development, and automated data workflows, with the ability to build and manage large-scale, high-efficiency data systems.
Responsibilities
Design, build, and maintain data pipelines and ETL processes using Databricks, SQL, and Python.
Develop and optimize batch ETL workflows and integrate Kafka streaming solutions.
Participate in Agile sprints to deliver scalable, mission-aligned data capabilities.
Enhance data quality, governance, and consistency across multiple domains.
Create, test, and deploy automated solutions for extracting, transforming, and loading data between systems.
Engage in backlog refinement, sprint planning, and technical reviews to ensure alignment with business objectives.
Drive continuous improvements in data infrastructure and analytics enablement.
Build subject matter expertise in assigned business areas.
Collaborate with developers to assess operational and reporting impacts throughout the SDLC.
Requirements
Bachelor's degree or equivalent experience (four additional years in lieu of degree).
Minimum of 5 years of relevant experience; commercial experience preferred.
Proficiency in Oracle PL/SQL, SQL, Oracle OEM, and CloudWatch.
Strong background in data modeling, integration, and engineering.
Hands-on experience with Databricks platform.
Familiarity with data governance tools such as Alation.
Skilled in Python or shell scripting.
Excellent written and verbal communication skills.
Strong analytical and problem-solving abilities.
Ability to pass a thorough background investigation.
Senior Data Engineer
Senior data engineer job in McLean, VA
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential, located in McLean, VA (remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-84666
Pay Range: $64 - $68/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Demonstrated ability implementing data warehouse solutions using modern data platforms such as Client, Databricks, or Redshift.
Build data integration solutions between transaction systems and analytics platforms.
Expand data integration solutions to ingest data from internal and external sources and to further transform as per the business consumption needs.
Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI and reporting.
Fundamental understanding of building data products through data enrichment and ML.
Act as a team player and share knowledge with the existing team members.
Key Requirements and Technology Experience:
Key skills: Python, AWS, Snowflake.
Bachelor's degree in computer science or a related field.
Minimum 5 years of experience in building data driven solutions.
At least 3 years of experience working with AWS services.
Applicants must be authorized to work in the US without requiring employer sponsorship currently or in the future. U.S. FinTech does not offer H-1B sponsorship for this position.
Expertise in real-time data solutions; knowledge of stream processing, message-oriented platforms, and ETL/ELT tools is good to have.
Strong scripting experience using Python and SQL.
Working knowledge of foundational AWS compute, storage, networking and IAM.
Understanding of Gen AI models, prompt engineering, RAG, fine-tuning, and pre-training will be a plus.
Solid scripting experience in AWS using Lambda functions.
Knowledge of CloudFormation templates preferred.
Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Client.
Experience in building data pipelines with related understanding of data ingestion, transformation of structured, semi-structured and unstructured data across cloud services.
Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions.
Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka etc.
Strong understanding of data security - authorization, authentication, encryption, and network security.
Hands-on experience using and extending machine learning frameworks and libraries (e.g., scikit-learn, PyTorch, TensorFlow, XGBoost) preferred.
Experience with AWS SageMaker family of services or similar tools to develop machine learning models preferred.
Strong written and verbal communication skills to facilitate meetings and workshops to collect data, functional and technology requirements, document processes, data flows, gap analysis, and associated data to support data management/governance related efforts.
Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures.
Demonstrated ability to be self-directed with excellent organization, analytical and interpersonal skills, and consistently meet or exceed deadline deliverables.
Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions.
Our client is a leader in the financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Lead Data Scientist
Senior data scientist job in Washington, DC
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Lead Data Scientist Overview
The Security Solutions Data Science team is responsible for creating deep learning Artificial Intelligence (A.I.) and Machine Learning (M.L.) models. The models generated are production ready and created to back specific products in Mastercard's authentication and authorization networks. The Data Science team is also responsible for developing automated processes for creating models covering all modeling steps, from data extraction up to delivery. In addition, the processes must be designed to scale, to be repeatable, resilient, and industrialized. You will be joining a team of Data Scientists and engineers working on innovative A.I. and M.L. attack detection, fraud detection, and anti-money laundering solutions. Our innovative cross-channel AI solutions are applied in Fortune 500 companies in industries such as fin-tech, investment banking, biotech, healthcare, and insurance. We are pursuing a highly motivated individual with strong problem-solving skills to take on the challenge of structuring and engineering data and cutting-edge A.I. model evaluation and reporting processes.
Role
As a Lead Data Scientist, you will:
• Work closely with the business owners to understand business requirements, performance metrics regarding data quality and model performance of customer facing products
• Lead the development of advanced attack detection models
• Oversee implementation of data and model development pipelines
• Explore anomalous and fraudulent patterns or trends for feature discovery and enhance attack detection model performance
• Manage the testing of trained models to ensure their robustness and assess their readiness for deployment
All About You
Essential Skills:
• Data science and data engineering experience
• Knowledge of model optimization techniques and the ability to work closely with data scientists implementing and optimizing models within big data pipelines
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras) and a strong grasp of data science and machine learning concepts, including several of XGBoost/LightGBM/GBM, Isolation Forest, and clustering
• Strong background in statistics, probability, and linear algebra as applied to machine learning models.
• Experience with SQL language and one or multiple of the following database technologies: Hadoop, Spark, PostgreSQL.
• Good knowledge of Linux / Bash environment
• Python, Pyspark
• Good communication skills
• Highly skilled problem solver
• Exhibits a high degree of initiative and critical thinking
• Graduate degree in CS, or a STEM related field.
• Demonstrated ability to independently contribute to overall team objectives.
Nice to have:
• Experience building payment attack or fraud detection models
• Experience in timeseries modeling techniques
• Ph.D. degree in CS, Data Science, Machine Learning, AI or a related STEM field
• Experience with data engineering in PySpark on petabyte-scale data
• Understands and implements methods to evaluate own work and others for error
• Loves working with error-prone, messy, disparate, unstructured data
Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach, and
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more.
Pay Ranges
O'Fallon, Missouri: $138,000 - $221,000 USD
Cloud Data Engineer- Databricks
Cloud data engineer job in McLean, VA
Purpose:
We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.
Key Result Areas and Activities:
Design and implement robust, scalable data engineering solutions.
Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI.
Collaborate with analytics and AI teams to enable real-time and batch data workflows.
Support and improve cloud-native data platforms (AWS, Azure, GCP).
Ensure adherence to best practices in data modeling, warehousing, and governance.
Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
Implement and maintain workflow orchestration tools like Apache Airflow and dbt.
Roles & Responsibilities
Essential Skills
4+ years of experience in data engineering with a focus on scalable solutions.
Strong hands-on experience with Databricks in a cloud environment.
Proficiency in Spark and Python for data processing.
Solid understanding of data modeling, data warehousing, and architecture principles.
Experience working with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with CI/CD pipelines and data workflow automation.
Desirable Skills
Direct experience with Unity Catalog and Mosaic AI within Databricks.
Working knowledge of DevOps/DataOps principles in a data engineering context.
Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research focused mindset
Excellent analytical, presentation, reporting, documentation and interactive skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
GCP Data Engineer
GCP data engineer job in Columbia, MD
🚀 Now Hiring: Data Engineer | 6-Month Contract-to-Hire | Lisle, IL & Columbia, MD
Are you passionate about building scalable data solutions and enabling business insights through cutting-edge technology?
We're looking for a Data Engineer to join our team and help design, develop, and optimize high-performance data pipelines in a dynamic and collaborative environment.
📍 Location: Lisle, IL & Columbia, MD (Hybrid 3-4 Days Onsite)
⏳ Duration: 6 Months Contract-to-Hire (CTH)
💼 Responsibilities
Collaborate with Business, IT, Analyst, and Data Science teams to gather and translate business requirements.
Design, develop, deploy, and support high-performance data pipelines (both inbound & outbound).
Model the data platform by applying business logic and building semantic layer objects.
Optimize pipelines for performance, scalability, and reliability.
Implement CI/CD pipelines to ensure continuous deployment and delivery.
Ensure data quality, create remediation plans, and collaborate on root cause resolutions.
Document data pipeline design and support strategies.
Capture, store, and socialize data lineage and operational metadata.
Troubleshoot and resolve data engineering issues.
Develop REST APIs for data exposure across internal teams.
Mentor and guide junior data engineers.
🎓 Education
Required: Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or related field.
Preferred: Master's degree in related technical discipline.
🧠 Qualifications & Skills
6+ years of experience in data engineering (data platforms, ingestion, management, or analytics).
2+ years of hands-on experience with Google Cloud Platform (GCP) - BigQuery, Composer, GCS, DataStream, Dataflow.
Expert knowledge of SQL and Python.
Experience with Airflow or similar workflow orchestration tools.
Skilled in query tuning for performance and scalability.
Experience in real-time data ingestion (GCP Pub/Sub, Kafka, Spark, etc.).
Strong problem-solving and analytical skills with attention to detail.
Experience working in an Agile environment.
Excellent organizational, communication, and mentoring abilities.