Data Engineer, Life Sciences Technology Solutions
Job Family:
Software Development & Support
Travel Required:
Up to 10%
Clearance Required:
None
What You Will Do:
The Data Engineer (Life Sciences Technology Solutions) is responsible for designing, building, and maintaining robust data pipelines and backend systems that support scalable software solutions for biopharma clients. Working within the data science and technology domain, this role collaborates with solution architects and full stack developers under the direction of the Life Sciences AI & Data Lead. Success in this position is measured by the ability to deliver reliable, high-performance data infrastructure that enables advanced analytics and digital transformation in life sciences.
Responsibilities and Duties:
Design, develop, and maintain ETL processes and data pipelines for large-scale data integration.
Implement and optimize data storage solutions using SQL and NoSQL databases.
Build and manage big data frameworks such as Hadoop and Spark to support advanced analytics.
Integrate cloud data services, including AWS Glue and Azure Data Factory, into enterprise data workflows.
Develop backend solutions using Python and Java for data processing and transformation.
Collaborate with solution architects and other team members to ensure seamless API integration.
Orchestrate workflows using tools like Airflow and Luigi to automate data movement and processing (see the sketch following this list).
Ensure data quality, governance, and compliance with industry standards.
Implement streaming technologies (Kafka) for real-time data ingestion and processing.
Monitor and tune system performance to maintain reliability and scalability.
Document data engineering processes and provide technical support to project teams.
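For illustration only, the Airflow orchestration mentioned above could be as simple as the following Python sketch; the DAG id, schedule, and task bodies are hypothetical placeholders rather than Guidehouse systems.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task logic; real tasks would call ETL and ingestion code.
    def extract():
        print("pull source data")

    def transform():
        print("clean and reshape")

    def load():
        print("write to the warehouse")

    with DAG(
        dag_id="example_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task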
What You Will Need:
Bachelor's degree in Computer Science, Information Systems, Engineering, or a related STEM field.
Minimum 6 years of experience in data engineering, backend development, or related roles.
Experience interconnecting multiple databases to better understand patient care and population health.
Proficiency in ETL development, data pipeline design, and workflow orchestration.
Competence in machine learning model development and applications.
Advanced programming skills in Python and JavaScript.
Knowledge of data warehousing, constructing and integrating API calls, and documenting dataflows.
Demonstrated ability to ensure data quality and governance.
Excellent analytical, problem-solving, and communication skills.
Ability to work collaboratively in a fast-paced, team-oriented environment.
What Would Be Nice To Have:
Master's degree.
Experience building application stacks in the biopharma industry or a consulting environment.
Demonstrated proficiency in building Databricks or Dataiku data pipelines to manage automation and CI/CD activities.
Direct prior responsibility for data management in a biopharma or other life sciences context.
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Parental Leave
401(k) Retirement Plan
Group Term Life and Travel Assistance
Voluntary Life and AD&D Insurance
Health Savings Account, Health Care & Dependent Care Flexible Spending Accounts
Transit and Parking Commuter Benefits
Short-Term & Long-Term Disability
Tuition Reimbursement, Personal Development, Certifications & Learning Opportunities
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Care.com annual membership
Employee Assistance Program
Supplemental Benefits via Corestream (Critical Care, Hospital Indemnity, Accident Insurance, Legal Assistance and ID theft protection, etc.)
Position may be eligible for a discretionary variable incentive bonus
About Guidehouse
Guidehouse is an Equal Opportunity Employer and does not discriminate on the basis of protected veteran status, disability, or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Data Scientist
Verona, NY
About the Company
Our client is hiring a Data Scientist to join a growing Strategic Analytics function focused on turning complex data into actionable business insights.
About the Role
This is a hands-on, high-impact role for someone who loves solving problems, building models, and consulting directly with the business. This position goes beyond execution: the Data Scientist will actively shape projects, apply predictive modeling and AI solutions, and clearly articulate insights to stakeholders.
Responsibilities
Extract, manipulate, and analyze large and complex datasets across the organization
Develop predictive models and AI-driven solutions
Build and maintain dashboards and reporting in Power BI
Consult with business partners to identify opportunities and translate insights into action
Work with enterprise-wide data assets, including detailed operational datasets
Clearly explain analytical findings to non-technical stakeholders
Qualifications
Technical Requirements
Strong proficiency in SQL and Python
Ability to write code and develop creative, scalable solutions
Experience with AI applications and Power BI strongly preferred
Senior Data Scientist
Birmingham, AL
We are seeking a Senior Data Scientist to lead data-driven innovation and deliver actionable insights that shape strategic decisions. In this role, you will collaborate with product, design, and engineering teams to develop advanced analytical models, optimize business processes, and build scalable data solutions. The work will focus on automating the integration of disparate, unstructured data into a structured system, a process that was previously manual, time-consuming, and error-prone. You will work with cutting-edge technologies across Python, AWS, Azure, and IBM Cloud (preferred) to design and deploy predictive models and machine learning algorithms in production environments.
Key Responsibilities:
Act as a senior data strategist, identifying and integrating new datasets into product capabilities.
Work will be geared toward automation use cases in which disparate data is restructured into a system that improves data-extraction accuracy, operational efficiency, and data quality.
Partner with engineering teams to build and enhance data products and pipelines.
Execute analytical experiments and develop predictive models to solve complex business challenges (a brief sketch follows this list).
Collect, clean, and prepare structured and unstructured datasets for analysis.
Build and optimize algorithms for large-scale data mining, pattern recognition, and predictive modeling.
Analyze data for trends and actionable insights to inform business decisions.
Deploy analytical models to production in collaboration with software developers and ML engineers.
Stay current with emerging technologies, cloud platforms, and industry best practices.
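For a sense of scale, a predictive-modeling baseline of the kind described above might begin with a scikit-learn pipeline like the sketch below; the synthetic dataset is a stand-in for real business features, not a claim about the client's data.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for cleaned, structured business features and labels.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")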
Required Skills & Education:
7+ years of experience in data science or advanced analytics.
Strong expertise in Python and proficiency in SQL.
Hands-on experience with AWS and Azure; familiarity with IBM Cloud is a bonus.
Advanced knowledge of data mining, statistical analysis, predictive modeling, and machine learning techniques.
Ability to work effectively in a dynamic, research-oriented environment with multiple projects.
Bachelor's degree in Statistics, Applied Mathematics, Computer Science, or related field (or equivalent experience).
Excellent communication skills to present insights to technical and non-technical stakeholders.
Preferred Qualifications:
2+ years of project management experience.
Relevant professional certifications (AWS, Azure, Data Science, Machine Learning).
About Seneca Resources:
At Seneca Resources, we are more than just a staffing and consulting firm; we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact.
When you work with Seneca, you're choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way.
Senior Data Scientist
Birmingham, AL
We're seeking a Contract-to-Hire Senior Data Scientist to lead and collaborate with a multidisciplinary team in designing and developing innovative analytical products and solutions using Machine Learning, NLP, and Deep Learning. This role is ideal for someone who thrives in ambiguity, enjoys solving complex problems, and can translate business needs into measurable outcomes.
What You'll Do
• Partner with business leaders to understand needs and define measurable goals
• Gather requirements, build project plans, manage deadlines, and communicate updates
• Analyze large structured and unstructured datasets
• Build, evaluate, implement, and maintain predictive models
• Present results to both technical and non-technical stakeholders
• Deploy models and monitor ongoing performance and data accuracy
• Contribute ideas, stay current with industry trends, and support team development
Lead-Level Opportunities Include:
• Driving data science strategy and overseeing project delivery
• Providing technical mentorship and leadership to the team
• Promoting innovation and exploring emerging tech, tools, and methodologies
What We're Looking For
• Bachelor's degree in Applied Mathematics, Statistics, Computer Science, Data Science, or related field
• 3-6 years of relevant experience (advanced degrees may reduce required experience)
• Strong skills in machine learning, statistical modeling, and data analysis
• Proficiency in Python or R
• Experience with large datasets, preprocessing, and feature engineering
• Prior management experience
• Experience with transfer learning
• Experience building and deploying deep learning solutions
• Strong communication skills and ability to present complex concepts clearly
• Experience in life insurance or a related domain is a plus
• Ability to independently manage projects end-to-end
Qualifications
• Master's or PhD
• Industry experience in similar roles
• Publications or patents in data science or ML
• Experience collaborating across technical and business teams
• Familiarity with software engineering best practices and version control
• Relevant certifications (AWS ML Specialty, Google Data Engineer, etc.)
Rooted in Birmingham. Focused on You.
We're a local recruiting firm based right here in Birmingham. We partner with top companies across the city, from large corporations to fast-growing startups, and we'd love to meet you for coffee to talk about your career goals. Whether you're actively searching or just exploring, we're here to guide you through the entire process, from resume tips to interview coaching.
At our clients' request, only individuals with required experience will be considered.
Please note: if you have recently submitted your resume to a PangeaTwo posting, your qualifications will be considered for other open opportunities.
Your resume will never be submitted to a client without your prior knowledge and consent to do so.
Data Scientist
New York, NY
Senior Data Scientist - Sports & Entertainment
Our client, a premier Sports, Entertainment, and Hospitality organization, is hiring a Senior Data Scientist. In this position you will own high-impact analytics projects that redefine how predictive analytics influence business strategy. This is a pivotal role where you will build and deploy machine learning solutions, ranging from Bayesian engagement scoring to purchase-propensity and lifetime-value models, to drive fan acquisition and revenue growth.
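As a hedged illustration of a purchase-propensity model of the sort named above, the sketch below trains a gradient-boosted classifier on synthetic data; the features, class balance, and hyperparameters are placeholders, not the organization's actual setup.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    # Synthetic stand-in for fan-level behavioral features and purchase labels.
    X, y = make_classification(n_samples=10000, n_features=30, weights=[0.9, 0.1], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05, eval_metric="logloss")
    model.fit(X_train, y_train)
    propensity = model.predict_proba(X_test)[:, 1]  # per-fan purchase probability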
Requirements:
Experience: 8+ years of professional experience using data science to solve complex business problems, preferably as an individual contributor or team lead.
Education: Bachelor's degree in Data Science, Statistics, Computer Science, or a related quantitative field (Master's or PhD preferred).
Tech Stack: Hands-on expertise in Python, SQL/PySpark, and ML frameworks (scikit-learn, XGBoost, TensorFlow, or PyTorch).
Infrastructure: Proficiency with cloud platforms (AWS preferred) and modern data stacks like Snowflake, Databricks, or Dataiku.
MLOps: Strong experience in productionizing models, including version control (Git), CI/CD, and model monitoring/governance.
Location: Brooklyn, NY (4 days onsite per week)
Compensation: $100,000 - $150,000 + Bonus
Benefits: Comprehensive medical/dental/vision, 401k match, competitive PTO, and unique access to live entertainment and sports events.
Data Scientist with ML
Reston, VA
Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support.
Candidates MUST have experience with Python, PyTorch, and Flask (at minimum working knowledge, with the ability to pick it up quickly); familiarity with REST APIs (at minimum); a statistics background; and a basic understanding of NLP.
Desired skills include experience performing R&D with natural language processing; deploying CNNs, LLMs, or foundation models; deploying ML models on multimedia data; Linux system administration (or Bash); Android configuration; and embedded systems (Raspberry Pi).
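To make the Flask/REST expectation concrete, here is a minimal model-serving sketch; the scoring function is a stub standing in for a real PyTorch/NLP model, and the route name is illustrative.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def score_text(text: str) -> float:
        # Stub: a real service would run a PyTorch/NLP model here.
        return min(len(text) / 100.0, 1.0)

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        return jsonify({"score": score_text(payload.get("text", ""))})

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=5000)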
Required Skills and Demonstrated Experience
Demonstrated experience in Python, JavaScript, and R.
Demonstrated experience employing machine learning and deep learning libraries such as pandas, scikit-learn, TensorFlow, and PyTorch.
Demonstrated experience with statistical inference, as well as building and understanding predictive models, using machine learning methods.
Demonstrated experience with large-scale text analytics.
Desired Skills
Demonstrated hands-on experience performing research or development with natural language processing and working with, deploying, and testing convolutional neural networks (CNNs), large language models (LLMs), or foundation models.
Demonstrated experience developing and deploying testing and verification methodologies to evaluate algorithm performance and identify strategies for improvement or optimization.
Demonstrated experience deploying machine learning models on multimedia data, to include joint text, audio, video, hardware, and peripherals.
Demonstrated experience with Linux System Administration and associated scripting languages (Bash).
Demonstrated experience with Android configuration, software development, and interfacing.
Demonstrated experience in embedded systems (Raspberry Pi).
Develops and conducts independent testing and evaluation methods on research-grade algorithms in applicable fields.
Reports results and provides documentation and guidance on working with the research-grade algorithms.
Evaluates, integrates, and leverages internally hosted data science tools.
Customizes research-grade algorithms for memory and computational efficiency through quantization, layer trimming, or custom methods.
Location:
Reston, Virginia
This position is onsite and there is no remote availability.
Clearance:
Active TS/SCI with Full Scope Polygraph
Applicants MUST hold permanent U.S. citizenship for this position in accordance with government contract requirements.
Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take the steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
Machine Learning Data Scientist
Pittsburgh, PA
Machine Learning Data Scientist
Length: 6 Month Contract to Start
*Please, no agencies. Direct employees currently authorized to work in the United States only; no sponsorship available.*
Job Description:
We are looking for a Data Scientist/Engineer with machine learning expertise and strong skills in Python, time-series modeling, and SCADA/industrial data. In this role, you will build and deploy ML models for forecasting, anomaly detection, and predictive maintenance using high-frequency sensor and operational data.
Essential Duties and Responsibilities:
Develop ML models for time-series forecasting and anomaly detection (see the sketch after this list)
Build data pipelines for SCADA/IIoT data ingestion and processing
Perform feature engineering and signal analysis on time-series data
Deploy models in production using APIs, microservices, and MLOps best practices
Collaborate with data engineers and domain experts to improve data quality and model performance
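One simple baseline for the anomaly-detection work named above is a rolling z-score over sensor readings; the pandas sketch below assumes a regularly sampled series, and the window and threshold are illustrative defaults. In practice this would sit beside richer ML models, but it illustrates the time-series feature work involved.

    import pandas as pd

    def rolling_zscore_anomalies(series: pd.Series, window: int = 96, threshold: float = 3.0) -> pd.Series:
        # Flag points more than `threshold` standard deviations from the rolling mean.
        mean = series.rolling(window, min_periods=window).mean()
        std = series.rolling(window, min_periods=window).std()
        z = (series - mean) / std
        return z.abs() > threshold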
Qualifications:
Strong Python skills
Experience working with SCADA systems or industrial data historians
Solid understanding of time-series analytics and signal processing
Experience with cloud platforms and containerization (AWS/Azure/GCP, Docker)
POST-OFFER BACKGROUND CHECK IS REQUIRED. Digital Prospectors is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Digital Prospectors affirms the right of all individuals to equal opportunity and prohibits any form of discrimination or harassment.
Come see why DPC has achieved:
4.9/5 star Glassdoor rating, and the only staffing company (under 1,000 employees) voted into the national Top 10 'Employees' Choice - Best Places to Work' by Glassdoor.
Voted 'Best Staffing Firm to Temp/Contract For' seven times by Staffing Industry Analysts, as well as a 'Best Company to Work For' by Forbes, Fortune, and Inc. magazine.
As you are applying, please join us in fostering diversity, equity, and inclusion by completing the Invitation to Self-Identify form today!
*******************
Job #18135
Data Scientist
Phoenix, AZ
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
Senior Data Engineer
Charlotte, NC
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling (see the sketch after this list).
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
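A minimal sketch of the API-heavy ingestion pattern referenced above, assuming a hypothetical paginated JSON endpoint that returns items and nextLink fields and throttles with HTTP 429/Retry-After:

    import time

    import requests

    def fetch_all(url: str, token: str, max_retries: int = 5) -> list:
        # Paginated GET with exponential backoff; the endpoint shape is hypothetical.
        headers = {"Authorization": f"Bearer {token}"}
        items = []
        while url:
            for attempt in range(max_retries):
                resp = requests.get(url, headers=headers, timeout=30)
                if resp.status_code == 429:  # throttled: honor Retry-After if present
                    time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
                    continue
                resp.raise_for_status()
                break
            else:
                raise RuntimeError(f"retries exhausted for {url}")
            payload = resp.json()
            items.extend(payload.get("items", []))
            url = payload.get("nextLink")  # hypothetical pagination field
        return items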
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Data Modeler
Austin, TX
We are seeking a Data Modeler in Austin, TX.
Onsite role
The Data Modeler will design, develop, and maintain complex data models that support higher education data initiatives. This role requires expertise in data modeling, database design, and data governance to ensure accurate and efficient data storage, retrieval, and processing. This position will work with cross-functional teams and an outside vendor to logically model the flow of data between agency systems. The ideal candidate will have experience in higher education, financial, or government data systems, working with relational and non-relational databases, and implementing best practices in data architecture.
Required skills:
4 years of experience in data modeling, database design, and data architecture.
Experience developing conceptual, logical, and physical data models.
Excellent communication skills, both verbal and written.
Proven ability to work on projects, ensuring timely completion within budget.
Proficiency in SQL and database management systems such as SQL Server, Oracle, or PostgreSQL.
Ability to implement data governance frameworks and ensure data quality.
Knowledge of ETL processes and data integration methodologies.
Experience documenting requirements for IT and business solutions that will meet program and user needs.
Experience working in cross-functional teams with business analysts, developers, and data engineers.
Experience working with sensitive data in higher education, financial, or government sectors.
Preferred Skills:
Experience in Agile development and backlogs.
This is a long-term contract opportunity for an On-site role in Austin, TX. No sponsorship can be provided. Candidates must be able to pass a background check. If this interests you, please send your resume to *****************************
Luna Data Solutions, Inc. provides equal employment opportunities to all employees. All applicants will be considered for employment without regard to age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, or disability status, and the company prohibits discrimination and harassment of any type.
Senior Data Analytics Engineer
Columbus, OH
We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products, spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.
You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services.
Build end-to-end solutions to move large-scale data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS); see the sketch below.
Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
Implement best practices for data quality, validation, auditing, and error-handling within pipelines.
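As a hedged sketch of the ingestion-to-curation flow above (bucket names and columns are hypothetical): read raw JSON from S3, apply basic validation, and write partitioned Parquet for downstream analytics.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("curate_orders").getOrCreate()

    # Paths and column names below are illustrative placeholders.
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    curated = (
        raw.withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("order_id").isNotNull())
           .dropDuplicates(["order_id"])
    )

    curated.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )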
Analytics Solution Design
Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.
Cross-Functional Collaboration
Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
Participate in Daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
Provide mentorship and guidance to junior engineers and analysts as needed.
Engineering (Supporting Skills)
Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application layer needs.
Implement scripts, utilities, and micro-services as needed to support analytics workloads.
Required Qualifications
5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
Expert-level proficiency (10/10) in:
Python
PySpark
Strong working knowledge of:
SQL and other programming languages
Demonstrated experience designing and delivering big-data ingestion and transformation solutions through AWS.
Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
Strong problem-solving skills and the ability to work independently in a fast-paced environment.
Preferred Qualifications
Experience with BI/Visualization tools such as Tableau
Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
Familiarity with machine learning workflows or MLOps frameworks.
Knowledge of metadata management, data governance, and data lineage tools.
Data Engineer (Mid-Level)
Orlando, FL
Job Title: Data Engineer (Mid-Level)
Employment Type: 6 months contract to hire
About the Role
We are seeking a highly skilled professional who can bridge the gap between data engineering and data analysis. This role is ideal for someone who thrives on building robust data models and optimizing existing data infrastructure to drive actionable insights. The split is roughly 70% engineering and 30% analysis.
Key Responsibilities
· Design and implement data models for service lines that currently lack structured models.
· Build and maintain scalable ETL pipelines to ensure efficient data flow and transformation.
· Optimize and enhance existing data models and processes for improved performance and reliability.
· Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
· Must have excellent communication skills
Required Skills
· SQL (must-have): Advanced proficiency in writing complex queries and optimizing performance.
· Data Modeling: Strong experience in designing and implementing relational and dimensional models.
· ETL Development: Hands-on experience with data extraction, transformation, and loading processes.
· Alteryx or Azure Data Factory (ADF) for pipeline development.
· Excel proficiency
· Experience with BI tools and data visualization (Power BI, Tableau).
Bachelor's degree required
Thanks and regards,
Ashish Tripathi || US IT Recruiter, KPG99, Inc.
******************| ***************
3240 E State, St Ext | Hamilton, NJ 08
Senior Data Engineer
Indianapolis, IN
Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. The successful candidate will support a large-scale data modernization initiative and operationalize the platform moving forward.
RESPONSIBILITIES:
Design, develop, and refine BI focused data architecture and data platforms
Work with internal teams to gather requirements and translate business needs into technical solutions
Build and maintain data pipelines supporting transformation
Develop technical designs, data models, and roadmaps
Troubleshoot and resolve data quality and processing issues
Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
Mentor and support junior team members
REQUIREMENTS:
5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
5+ years of experience across end-to-end data analysis and development
Experience using Git version control
Advanced SQL skills
Strong experience with AWS cloud
PREFERRED SKILLS:
Experience with Snowflake
Experience with Python or R
Bachelor's degree in an IT-Related field
TERMS:
This is a direct hire opportunity with a salary of up to $130K based on experience. The client offers benefits including medical, dental, and vision coverage, along with generous PTO, 401(k) matching, and wellness programs.
ML Engineer with Timeseries data experience
Atlanta, GA
Role: ML Engineer with Timeseries data experience
Hybrid in Atlanta, GA (locals preferred)
$58/hr on C2C, Any Visa
Model Development: Design, build, train, and optimize ML/DL models for time-series forecasting, prediction, anomaly detection, and causal inference.
Data Pipelines: Create robust data pipelines for collection, preprocessing, feature engineering, and labeling of large-scale time-series data.
Scalable Systems: Architect and implement scalable AI/ML infrastructure and MLOps pipelines (CI/CD, monitoring) for production deployment.
Collaboration: Work with data engineers, software developers, and domain experts to integrate AI solutions.
Performance: Monitor, troubleshoot, and optimize model performance, ensuring robustness and real-world applicability.
Languages & Frameworks: Good understanding of the AWS ecosystem, Python (pandas, NumPy), PyTorch, TensorFlow, scikit-learn, and PySpark.
ML/DL Expertise: Strong grasp of time-series models (ARIMA, Prophet, deep learning), anomaly detection, and predictive analytics (see the sketch below).
Data Handling: Experience with large datasets, feature engineering, and scalable data processing.
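To ground the time-series expectation, a minimal statsmodels ARIMA sketch follows; the file name, column, and (p, d, q) order are placeholders chosen for illustration, and a production model would be selected via validation.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical regularly sampled sensor series.
    series = pd.read_csv("sensor.csv", index_col="timestamp", parse_dates=True)["value"]

    model = ARIMA(series, order=(2, 1, 2)).fit()
    forecast = model.forecast(steps=24)  # next 24 periods
    print(forecast.head())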
LEAD SNOWFLAKE DATA ENGINEER
Minneapolis, MN
Job Title: Lead Snowflake Data Engineer
Employment Type: 6-month Contract-to-Hire
Work Arrangement: On-site (4 days/week)
Eligibility: U.S. Citizen or Green Card holders only
Experience Level: 7+ years
Role Overview
We are seeking a Lead Snowflake Data Engineer to design, build, and optimize modern cloud-based data platforms. This role requires deep hands-on expertise with Snowflake, strong SQL skills, cloud data engineering experience, and the ability to lead and mentor a team of data engineers.
Required Qualifications
7+ years of experience in data engineering or related roles
5-10 years of hands-on experience with Snowflake
Strong proficiency in SQL, including complex query development and stored procedures
Experience with automation and scripting (e.g., Python, Shell, or similar)
Hands-on experience with data ingestion and transformation frameworks
Strong understanding of Snowflake architecture, including storage, compute, security, and infrastructure
Proven experience with Snowflake troubleshooting and performance tuning
Experience with cloud platforms such as AWS and/or Azure
Solid understanding of Cloud Data Lakehouse architectural patterns
Experience leading, mentoring, and providing technical direction to data engineering teams
Proven ability to work closely with business partners to develop and manage data domains
Preferred / Additional Skills
Experience in one or more of the following areas is highly desirable:
Programming languages (e.g., Python, Java, Scala)
Relational and non-relational databases
ETL / ELT tools and frameworks
Data storage solutions (on-premises and cloud-based)
Big data technologies
Machine learning or advanced analytics
Data modeling and data visualization tools
Cloud computing and data security best practices
Data Engineer
Charlotte, NC
Experience Level: Mid (5-7 Years)
W2 ONLY - NO 3RD PARTIES PLEASE
CONTRACT / C2H
Role Objectives
• These roles will be part of the Data Strategy team, spanning the Client Capital Markets teams.
• These roles will be involved in the active development of the data platform in close coordination with the Client team, beginning with the establishment of a reference data system for securities and pricing data and later moving to other data domains.
• The consulting team will need to follow internal development standards to contribute to the overall agenda of the Data Strategy team.
Qualifications and Skills
• Proven experience as a Data Engineer with experience in Azure cloud.
• Experience implementing solutions using:
• Azure cloud services
• Azure Data Factory
• Azure Data Lake Gen2
• Azure Databases
• Azure Data Fabric
• API Gateway management
• Azure Functions
• Well versed with Azure Databricks
• Strong SQL skills with RDBMS or NoSQL databases
• Experience with developing APIs using FastAPI or similar frameworks in Python
• Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes
• Good understanding of ETL/ELT processes
• Experience in the financial services industry, including financial instruments, asset classes, and market data, is a plus.
Data Engineer
Charlotte, NC
RESOURCE TYPE: W2 Only
Charlotte, NC - Hybrid
Experience Level: Mid (5-7 Years)
Role Description
A leading Japanese bank is in the process of driving a Digital Transformation across its Americas Division as it continues to modernize technology, strengthen its data-driven approach, and support future growth. As part of this initiative, the firm is seeking an experienced Data Engineer to support the design and development of a strategic enterprise data platform supporting Capital Markets and affiliated securities businesses.
This role will contribute to the development of a scalable, cloud-based data platform leveraging Azure technologies, supporting multiple business units across North America and global teams.
Role Objectives
Serve as a member of the Data Strategy team, supporting broker-dealer and swap-dealer entities across the Americas Division.
Participate in the active development of the enterprise data platform, beginning with the establishment of reference data systems for securities and pricing data, and expanding into additional data domains.
Collaborate closely with internal technology teams while adhering to established development standards and best practices.
Support the implementation and expansion of the strategic data platform on the bank's Azure Cloud environment.
Contribute technical expertise and solution design aligned with the overall Data Strategy roadmap.
Qualifications and Skills
Proven experience as a Data Engineer, with strong hands-on experience in Azure cloud environments.
Experience implementing solutions using:
Azure Cloud Services
Azure Data Factory
Azure Data Lake Gen2
Azure Databases
Azure Data Fabric
API Gateway management
Azure Functions
Strong experience with Azure Databricks.
Advanced SQL skills across relational and NoSQL databases.
Experience developing APIs using Python (FastAPI or similar frameworks).
Familiarity with DevOps and CI/CD pipelines (Git, Jenkins, etc.).
Strong understanding of ETL / ELT processes.
Experience within financial services, including exposure to financial instruments, asset classes, and market data, is a strong plus.
Data Engineer
Tempe, AZ
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance (see the sketch after this list)
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
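A hedged sketch of the Delta Lake work named above: an idempotent upsert (MERGE) from a raw drop into a Silver table. The paths and key column are hypothetical, and on Databricks the spark session is already provided.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

    # Illustrative placeholder paths and key column.
    updates = spark.read.parquet("/mnt/raw/customer_updates/")
    target = DeltaTable.forPath(spark, "/mnt/silver/customers/")

    (
        target.alias("t")
        .merge(updates.alias("u"), "t.customer_id = u.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )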
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Engineer
Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Senior Data Engineer
Boston, MA
firstPRO is now accepting resumes for a Senior Data Engineer role in Boston, MA. This is a direct hire role, onsite 2-3 days per week.
RESPONSIBILITIES INCLUDE
Support and enhance the firm's Data Governance, BI platforms, and data stores.
Administer and extend data governance tools including Atlan, Monte Carlo, Snowflake, and Power BI.
Develop production-quality code and data solutions supporting key business initiatives.
Conduct architecture and code reviews to ensure security, scalability, and quality across deliverables.
Collaborate with the cloud migration, information security, and business analysis teams to design and deploy new applications and migrate existing systems to the cloud.
TECHNOLOGY EXPERIENCE
Hands-on experience supporting SaaS, business-facing applications.
Expertise in Python for data processing, automation, and production-grade development.
Strong knowledge of SQL, data modeling, and data warehouse design (Kimball/star schema preferred).
Experience with Power BI or similar BI/reporting tools.
Familiarity with data pipeline technologies and orchestration tools (e.g., Airflow, dbt).
Experience with Snowflake, Redshift, BigQuery, or Athena.
Understanding of data governance, data quality, and metadata management frameworks.
QUALIFICATIONS
BS or MS in Computer Science, Engineering, or a related technical field.
7+ years of professional software or data engineering experience.
Strong foundation in software design and architectural patterns.