Software Support Engineer
Data engineer job at Brooksource
As the Software Support Engineer, you will be the first line of support to troubleshoot and resolve customer-reported technical and operational issues. This position is a unique blend of software development, quality assurance testing, on-site implementation and installation, and support. A Bachelor's degree in a related technical field is required, as well as a passion for software development and testing.
MINIMUM QUALIFICATIONS:
• Bachelor's degree in a related field (Computer Science, Computer Information Systems, Software Engineering, etc.)
• Experience designing software applications and a strong understanding of the Software Development Life Cycle (SDLC)
• Experience troubleshooting and problem solving
• A friendly, customer-service-oriented personality
• Familiarity with relational databases (MySQL preferred)
• Experience with Python and C development preferred
• Experience in a Linux environment preferred
• Prior technical support experience is a plus
RESPONSIBILITIES:
• Software Testing and onsite implementations/installation
• Provide customer support to assist with operational and system issues
• Design, maintain and modify existing applications
• Take responsibility for multiple tasks in multiple projects simultaneously
• Review existing code and make required modifications
• Submit all designs and code modifications for peer review
• Write, maintain and execute test plans
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Senior Data Engineer
Charlotte, NC jobs
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
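The retry and throttling handling named in the responsibilities above can be sketched in plain Python. This is a minimal illustration of the exponential-backoff pattern, not tied to any specific Azure SDK; `flaky_fetch` is a hypothetical stand-in for a real API call:

```python
import time

def with_retries(fn, max_attempts=4, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a flaky endpoint that succeeds on the third call.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"status": "ok", "rows": 42}

result = with_retries(flaky_fetch)
print(result)  # {'status': 'ok', 'rows': 42}
```

In production this logic typically lives in an ingestion framework rather than ad-hoc code, with jitter added to the delay to avoid thundering-herd retries.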
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
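As a small illustration of the pagination requirement listed above, here is a cursor-based pagination loop in Python. `fetch_page` is a hypothetical stand-in for a paginated REST endpoint; a real client would make HTTP calls and combine this with retry/rate-limit handling:

```python
def fetch_page(cursor=None):
    """Hypothetical stand-in for a cursor-paginated REST endpoint."""
    pages = {
        None: {"items": [1, 2], "next": "p2"},
        "p2": {"items": [3, 4], "next": "p3"},
        "p3": {"items": [5], "next": None},
    }
    return pages[cursor]

def fetch_all():
    """Follow the cursor until the API signals the last page."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page["next"]
        if cursor is None:
            return items

print(fetch_all())  # [1, 2, 3, 4, 5]
```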
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Senior Data Engineer
Nashville, TN jobs
Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************
YOUR ROLE
Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights.
You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care.
As a Senior Data Engineer you will:
Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams.
Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency.
Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
Work with SageMaker for machine learning data preparation and integration.
Ensure data security, privacy, and compliance with industry regulations.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
Implement best practices for data engineering, including code reviews, testing, and automation.
Mentor junior data engineers and share knowledge on data engineering best practices.
Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies.
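The ETL/ELT work described above usually relies on incremental (watermark-based) extraction rather than full reloads. A minimal sketch of the pattern, with an in-memory list standing in for a real source such as S3 or Redshift:

```python
from datetime import datetime

# Stand-in for source rows with update timestamps.
rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "updated_at": datetime(2024, 1, 5)},
    {"id": 3, "updated_at": datetime(2024, 1, 9)},
]

def incremental_extract(rows, watermark):
    """Return only rows changed since the last successful run,
    plus the new watermark to persist for the next run."""
    batch = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in batch), default=watermark)
    return batch, new_watermark

batch, wm = incremental_extract(rows, datetime(2024, 1, 2))
print([r["id"] for r in batch])  # [2, 3]
```

The persisted watermark is what makes reruns idempotent: a failed run that never commits its watermark simply re-extracts the same batch.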
After 3 months on the job you will have:
Developed a strong understanding of Concert's data engineering infrastructure
Learned the business domain and how it maps to the information architecture
Made material contributions towards existing key results
After 6 months you will have:
Led a major initiative
Become the first point of contact when issues related to the data warehouse are identified
After 12 months you will have:
Taken responsibility for the long term direction of the data engineering infrastructure
Proposed and executed key results with an understanding of the business strategy
Communicated the business value of major technical initiatives to key non-technical business stakeholders
WHAT LEADS TO SUCCESS
Self-Motivated: A team player with a positive attitude and a proactive approach to problem-solving.
Executes Well: You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
Comfort with Ambiguity: You aren't afraid of uncertainty or blazing new trails; you care about building toward a future that is different from today.
Technical Bravery: You are comfortable with new technologies and eager to dive in to understand data both in its raw and processed states.
Mission-Focused: You are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
Effective Communication: You build rapport and strong working relationships with senior leaders and peers, and you use those relationships to drive the company forward.
RELEVANT SKILLS & EXPERIENCE
Minimum of 4 years' experience working as a data engineer
Bachelor's degree in software or data engineering or comparable technical certification / experience
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
Strong understanding of data warehousing principles and best practices
Experience with DBT for data transformation and modeling.
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Familiarity or experience with the following tools/concepts is a plus: BI tools such as Metabase; healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; ZeroETL and Kinesis Data Streams
COMPENSATION
Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.
LOCATION
Concert is based in Nashville, Tennessee and supports a remote work environment.
For further questions, please contact: ******************.
Data Scientist
Verona, NY jobs
About the Company
Our client is hiring a Data Scientist to join a growing Strategic Analytics function focused on turning complex data into actionable business insights.
About the Role
This is a hands-on, high-impact role for someone who loves solving problems, building models, and consulting directly with the business. This position goes beyond execution - the Data Scientist will actively shape projects, apply predictive modeling and AI solutions, and clearly articulate insights to stakeholders.
Responsibilities
Extract, manipulate, and analyze large and complex datasets across the organization
Develop predictive models and AI-driven solutions
Build and maintain dashboards and reporting in Power BI
Consult with business partners to identify opportunities and translate insights into action
Work with enterprise-wide data assets, including detailed operational datasets
Clearly explain analytical findings to non-technical stakeholders
Qualifications
Technical Requirements
Strong proficiency in SQL and Python
Ability to write code and develop creative, scalable solutions
Experience with AI applications and Power BI strongly preferred
Data Engineer
Stillwater, MN jobs
Title: Sr. Data Engineer
Duration: 6-Month Contract/Right to Hire
Responsibilities
Ability to trace data lineage and diagnose data quality issues at the source
Strong problem-solving skills and ability to work independently
Qualifications
5+ years hands-on experience with enterprise data warehouses and ETL/ELT pipelines
Strong Databricks experience required
Expert-level SQL skills; Python scripting experience preferred
Experience optimizing data feeds, scheduling, and pipeline timing
Experience consolidating data sources and deprecating legacy feeds
Retail or grocery industry experience required (understanding of POS data, item costing, merchant data)
Cloud platform experience (Azure or GCP)
The projected hourly rate for this position is $75-100/hr.
On-Demand Group (ODG) provides employee benefits, which include healthcare, dental, and vision insurance. ODG is an equal opportunity employer that does not discriminate on the basis of race, color, religion, gender, sexual orientation, age, national origin, disability, or any other characteristic protected by law.
Data Engineer
Denver, CO jobs
Data Engineer
Compensation: $80-$90/hour, depending on experience
Inceed has partnered with a great energy company to help find a skilled Data Engineer to join their team!
Join a dynamic team where you'll be at the forefront of data-driven operations. This role offers the autonomy to design and implement groundbreaking data architectures, working primarily remotely. This position is open due to exciting new projects. You'll be collaborating with data scientists and engineers, making impactful contributions to the company's success.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from various sources ensuring consistency and reliability
Develop automation workflows and BI solutions
Mentor others and advise on data process best practices
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of data engineering experience
Experience with PI
Experience with SCADA
Experience with Palantir
Experience with large oil and gas datasets
Proficiency in Python and SQL
Hands-on experience in cloud environments (Azure, AWS, GCP)
Nice to Have Skills & Experience:
Familiarity with Apache Kafka or Flink
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Denver, CO jobs
Data Engineer
Compensation: $80 - $90/hour, depending on experience
Inceed has partnered with a great company to help find a skilled Data Engineer to join their team!
Join a dynamic team as a contract Data Engineer, where you'll be the backbone of data-driven operations. This role offers the opportunity to work with a modern tech stack in a hybrid on-prem and cloud environment. You'll design and implement innovative solutions to complex challenges, collaborating with data scientists, location intelligence experts, and ML engineers. This exciting opportunity has opened due to a new project initiative and you'll be making a tangible impact.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from multiple sources ensuring quality and reliability
Develop automation workflows and BI solutions
Mentor others and contribute to the knowledge base
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of experience in data engineering
Experience with large oil and gas datasets
Proficiency in SQL and Python
Hands-on experience in cloud environments (Azure, AWS, or GCP)
Familiarity with Apache Kafka, Apache Flink, or Azure Event Hubs
Nice to Have Skills & Experience:
Experience with Palantir Foundry
Knowledge of query federation platforms
Experience with modern data stack tools like dbt or Airflow
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Bloomington, MN jobs
Key Responsibilities
Design, build, and maintain scalable data pipelines for ingesting, cleaning, and transforming provider data.
Develop and optimize workflows in Databricks for large-scale data processing.
Implement and manage data storage solutions using the Microsoft Azure suite, including Azure Data Lake, Blob Storage, and Azure SQL.
Collaborate with API developers and data consumers to ensure seamless API data consumption.
Work closely with data scientists, analysts, and product owners to ensure data quality, consistency, and availability.
Contribute to the evolution of our data lake and warehouse architecture to support current and future analytics needs.
Required Qualifications
Hands-on experience with Databricks and Apache Spark.
Proficient in SQL, Python, PySpark, and Git
Strong proficiency with Microsoft Azure cloud services, especially in data storage and compute.
Proven experience with data lakes and/or data warehouses.
Solid understanding of REST APIs and experience consuming them in data workflows.
Experience with data ingestion, ETL/ELT pipelines, and data cleaning techniques.
Preferred Qualifications
Hands-on experience with Power BI
Experience building or maintaining GraphQL APIs.
Experience designing and developing REST APIs.
Familiarity with AI/ML integration in data pipelines or analytics workflows.
Knowledge of healthcare data standards and provider data models is a plus.
The projected hourly range for this position is $65.00 to $85.00.
On-Demand Group (ODG) provides employee benefits, which include healthcare, dental, and vision insurance. ODG is an equal opportunity employer that does not discriminate on the basis of race, color, religion, gender, sexual orientation, age, national origin, disability, or any other characteristic protected by law.
Data Engineer
Hamilton, NJ jobs
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors.
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
IT Data Engineer
Lakewood, CO jobs
IT Data Engineer
Compensation: $125k-$155k (DOE)
Inceed has partnered with a great company to help find a skilled IT Data Engineer to join their team!
Join a dynamic team where innovation meets opportunity. This role is pivotal in advancing AI and data modernization initiatives, bridging traditional database administration with cutting-edge AI data infrastructure. The team thrives on collaboration and offers a hybrid work schedule.
Key Responsibilities & Duties:
Design and maintain scalable data pipelines.
Develop RAG workflows for AI information access.
Build secure connectors and APIs for data retrieval.
Monitor and optimize data flows for consistency.
Lead database administration and performance tuning.
Manage database upgrades and storage optimization.
Implement database security controls and standards.
Support application integrations and data migrations.
Define and maintain data models and metadata.
Collaborate with teams to ensure compliance requirements.
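The RAG workflow mentioned in the responsibilities above boils down to retrieving the most relevant documents by vector similarity before handing them to a model. A toy sketch of the retrieval step, using hand-written vectors in place of a real embedding model (all names and numbers here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy document vectors; a real pipeline would use an embedding model
# and a vector store instead of a dict.
docs = {
    "backup policy": [0.9, 0.1, 0.0],
    "index tuning":  [0.1, 0.8, 0.3],
    "pto handbook":  [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['backup policy']
```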
Required Qualifications & Experience:
Bachelor's degree in Computer Science or related field.
7+ years in database administration or data engineering.
Advanced SQL and data modeling skills.
Experience with AI and analytics data pipelines.
Familiarity with cloud-based data ecosystems.
Hands-on experience with RAG and vectorization.
Proficiency in scripting languages like Python.
Experience leading vendor-to-internal transitions.
Nice to Have Skills & Experience:
Experience integrating enterprise systems into data platforms.
Knowledge of data governance frameworks.
Understanding of semantic data modeling.
Experience with cloud migration of database workloads.
Perks & Benefits:
This opportunity includes a comprehensive and competitive benefits package; details will be shared during later stages of the hiring process.
Other Information:
Hybrid work schedule
This position requires a background check and drug test
If you are interested in learning more about the IT Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
ML Engineer with Timeseries data experience
Atlanta, GA jobs
Role: ML Engineer with Timeseries data experience
Hybrid in Atlanta, GA (locals preferred)
$58/hr on C2C, Any Visa
Model Development: Design, build, train, and optimize ML/DL models for time-series forecasting, prediction, anomaly detection, and causal inference.
Data Pipelines: Create robust data pipelines for collection, preprocessing, feature engineering, and labeling of large-scale time-series data.
Scalable Systems: Architect and implement scalable AI/ML infrastructure and MLOps pipelines (CI/CD, monitoring) for production deployment.
Collaboration: Work with data engineers, software developers, and domain experts to integrate AI solutions.
Performance: Monitor, troubleshoot, and optimize model performance, ensuring robustness and real-world applicability.
Languages & Frameworks: Good understanding of the AWS ecosystem, Python (Pandas, NumPy), PyTorch, TensorFlow, scikit-learn, and PySpark.
ML/DL Expertise: Strong grasp of time-series models (ARIMA, Prophet, deep learning), anomaly detection, and predictive analytics.
Data Handling: Experience with large datasets, feature engineering, and scalable data processing.
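The anomaly-detection work described above is often prototyped with a rolling z-score before reaching for deep models. A minimal stdlib-only sketch (window size and threshold are illustrative choices, not prescribed values):

```python
import statistics

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates strongly from the
    trailing window's mean, measured in standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sd = statistics.stdev(hist)
        if sd > 0 and abs(series[i] - mu) / sd > threshold:
            anomalies.append(i)
    return anomalies

signal = [10, 11, 10, 12, 11, 10, 11, 50, 11, 10]
print(rolling_zscore_anomalies(signal))  # [7]
```

Note the classic caveat visible even in this toy: once the spike enters the trailing window it inflates the standard deviation, masking nearby points, which is one reason robust statistics (median/MAD) are often preferred.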
LEAD SNOWFLAKE DATA ENGINEER
Minneapolis, MN jobs
Job Title: Lead Snowflake Data Engineer
Employment Type: 6-month Contract-to-Hire
Work Arrangement: On-site (4 days/week)
Eligibility: U.S. Citizens or Green Card holders only
Experience Level: 7+ years
Role Overview
We are seeking a Lead Snowflake Data Engineer to design, build, and optimize modern cloud-based data platforms. This role requires deep hands-on expertise with Snowflake, strong SQL skills, cloud data engineering experience, and the ability to lead and mentor a team of data engineers.
Required Qualifications
7+ years of experience in data engineering or related roles
5-10 years of hands-on experience with Snowflake
Strong proficiency in SQL, including complex query development and stored procedures
Experience with automation and scripting (e.g., Python, Shell, or similar)
Hands-on experience with data ingestion and transformation frameworks
Strong understanding of Snowflake architecture, including storage, compute, security, and infrastructure
Proven experience with Snowflake troubleshooting and performance tuning
Experience with cloud platforms such as AWS and/or Azure
Solid understanding of Cloud Data Lakehouse architectural patterns
Experience leading, mentoring, and providing technical direction to data engineering teams
Proven ability to work closely with business partners to develop and manage data domains
Preferred / Additional Skills
Experience in one or more of the following areas is highly desirable:
Programming languages (e.g., Python, Java, Scala)
Relational and non-relational databases
ETL / ELT tools and frameworks
Data storage solutions (on-premises and cloud-based)
Big data technologies
Machine learning or advanced analytics
Data modeling and data visualization tools
Cloud computing and data security best practices
Data Engineer
Houston, TX jobs
We are currently seeking a Data Engineer to join the Big Data and Advanced Analytics department. As part of the Data Engineering team, the Data Engineer will work closely with the Data Science team and Business functions to solve real-world oil and gas midstream problems using machine learning, data science and artificial intelligence. This individual will provide technical and thought leadership to the team to build out a data engineering practice within the organization.
Must Have Skills
Python (Pandas, NumPy, Pytest, scikit-learn)
SQL
Apache Airflow
Kubernetes
CI/CD
Git
Test Driven Development (TDD)
API development
Knowledgeable of Machine Learning
Responsibilities
Create, test, and maintain data pipeline architectures
Work independently on analytic projects for multiple business functions
Automate manual data flows for repeated use and scalability
Develop data-intensive applications with APIs
Develop algorithms to transform data into useful, actionable information
Operationalize mathematical models and machine learning models
Assist data analysts and data scientists with data processing automation and deployment
Implement data quality checks to ensure data accuracy and consistency
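The data quality checks in the last responsibility above typically cover at least null-field and duplicate-key validation. A minimal sketch of that idea (field names and rules here are illustrative, not the client's actual schema):

```python
def run_quality_checks(records, required=("id", "ts"), unique_key="id"):
    """Return a list of human-readable data-quality violations:
    missing required fields and duplicate keys."""
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        for field in required:
            if rec.get(field) is None:
                issues.append(f"row {i}: missing {field}")
        key = rec.get(unique_key)
        if key in seen:
            issues.append(f"row {i}: duplicate {unique_key}={key}")
        seen.add(key)
    return issues

rows = [{"id": 1, "ts": "2024-01-01"},
        {"id": 1, "ts": "2024-01-02"},
        {"id": 2, "ts": None}]
print(run_quality_checks(rows))
```

In a real pipeline these checks would run as a gating step (e.g. an Airflow task) that fails the load, rather than just printing violations.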
Senior Data Engineer
Indianapolis, IN jobs
Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. The successful candidate will be responsible for supporting a large-scale data modernization initiative and operationalizing the platform moving forward.
RESPONSIBILITIES:
Design, develop, and refine BI focused data architecture and data platforms
Work with internal teams to gather requirements and translate business needs into technical solutions
Build and maintain data pipelines supporting transformation
Develop technical designs, data models, and roadmaps
Troubleshoot and resolve data quality and processing issues
Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
Mentor and support junior team members
REQUIREMENTS:
5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
5+ years of experience across end-to-end data analysis and development
Experience using Git version control
Advanced SQL skills
Strong experience with AWS cloud
PREFERRED SKILLS:
Experience with Snowflake
Experience with Python or R
Bachelor's degree in an IT-Related field
TERMS:
This is a direct hire opportunity with a salary up to $130K based on experience. They offer benefits including medical, dental, and vision along with generous PTO, 401K matching, wellness programs, and other benefits.
Azure Data Engineer
Jersey City, NJ jobs
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Data Engineer - Medical
Durham, NC jobs
Our client in Durham, NC, a company in the healthcare industry, is seeking to hire a Software Engineer, hybrid remote, full-time.
Candidates must be GC holders or US citizens (no H1B or OPT visas).
Requirements:
• Python (production experience)
• SQL (hands-on querying with large datasets)
• Google Cloud Platform (GCP) (real-world usage)
• Experience working with large-scale or distributed data systems
• Healthcare / HealthTech experience
• Exposure to medical data or regulated data environments
(e.g., medical imaging, clinical data, HIPAA/GDPR compliance)
STRONG TECH PREFERENCES
• BigQuery
• Dataflow (major positive)
• Apache Beam
• App Engine
• Cloud Functions (GCP preferred)
• GitHub / GitHub Actions
Senior Data Engineer
Boston, MA jobs
first PRO is now accepting resumes for a Senior Data Engineer role in Boston, MA. This is a direct hire role and onsite 2-3 days per week.
RESPONSIBILITIES INCLUDE
Support and enhance the firm's Data Governance, BI platforms, and data stores.
Administer and extend data governance tools including Atlan, Monte Carlo, Snowflake, and Power BI.
Develop production-quality code and data solutions supporting key business initiatives.
Conduct architecture and code reviews to ensure security, scalability, and quality across deliverables.
Collaborate with the cloud migration, information security, and business analysis teams to design and deploy new applications and migrate existing systems to the cloud.
TECHNOLOGY EXPERIENCE
Hands-on experience supporting SaaS, business-facing applications.
Expertise in Python for data processing, automation, and production-grade development.
Strong knowledge of SQL, data modeling, and data warehouse design (Kimball/star schema preferred).
Experience with Power BI or similar BI/reporting tools.
Familiarity with data pipeline technologies and orchestration tools (e.g., Airflow, dbt).
Experience with Snowflake, Redshift, BigQuery, or Athena.
Understanding of data governance, data quality, and metadata management frameworks.
QUALIFICATIONS
BS or MS in Computer Science, Engineering, or a related technical field.
7+ years of professional software or data engineering experience.
Strong foundation in software design and architectural patterns.
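The Kimball/star-schema design called out above centers on a fact table joined to dimension tables. A minimal sketch of that pattern, using SQLite for portability (all table and column names are invented for illustration):

```python
import sqlite3

# Minimal star-schema sketch (Kimball-style): one fact table keyed to two
# dimension tables. Names and data are invented for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, calendar_date TEXT);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL
);
INSERT INTO dim_customer VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES
    (1, 20240101, 100.0), (1, 20240101, 50.0), (2, 20240101, 75.0);
""")

# Typical star join: aggregate the facts, label them with dimension attributes.
rows = cur.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```

In a warehouse such as Snowflake or Redshift the DDL differs, but the fact/dimension separation and the star join are the same idea.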
Data Engineer
Tempe, AZ jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
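Several of the Delta Lake items above come down to MERGE-style upserts (match on a key, update the matched row, insert the rest). As a rough sketch of that semantics in plain Python, with no Spark dependency and invented table/key names:

```python
# Plain-Python sketch of the upsert semantics a Delta Lake MERGE statement
# provides: WHEN MATCHED -> update, WHEN NOT MATCHED -> insert.
# Table contents and the "sku" key are invented for illustration.

def merge_upsert(target: dict, updates: list[dict], key: str) -> dict:
    """Apply MERGE-style upserts to a key-indexed target table."""
    merged = dict(target)
    for row in updates:
        merged[row[key]] = row   # matched -> overwrite; not matched -> insert
    return merged

target = {
    "sku-1": {"sku": "sku-1", "qty": 10},
    "sku-2": {"sku": "sku-2", "qty": 4},
}
updates = [
    {"sku": "sku-1", "qty": 12},   # matched: update
    {"sku": "sku-3", "qty": 7},    # not matched: insert
]
result = merge_upsert(target, updates, key="sku")
```

On Databricks this is a single `MERGE INTO` over a Delta table, with ACID guarantees the dictionary version obviously lacks; the sketch shows only the matching logic.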
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Engineer
Austin, TX jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Scientist
Phoenix, AZ jobs
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
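The store-level pricing work described above typically means fitting a demand model from sales history and then choosing the price that maximizes expected revenue. A deliberately tiny sketch, with an invented price/units history and a closed-form least-squares fit (no claim that this is the client's production model):

```python
# Minimal sketch of an item-level pricing model: fit units-sold vs. price with
# ordinary least squares, then pick the revenue-maximizing price on a grid.
# All data and numbers are invented for illustration.

def ols_fit(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical history for one item: demand falls as price rises.
prices = [8.0, 9.0, 10.0, 11.0, 12.0]
units = [100.0, 90.0, 80.0, 70.0, 60.0]
intercept, slope = ols_fit(prices, units)   # fits units = 180 - 10 * price

def revenue(p: float) -> float:
    return p * (intercept + slope * p)

# Search a $0.10 price grid from $8.00 to $12.00 for the best revenue.
best_price = max((cents / 10 for cents in range(80, 121)), key=revenue)
```

A production pricing model would of course use many more features and validation against held-out data; the sketch only shows the fit-then-optimize loop the posting alludes to.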