
Senior Data Scientist jobs at Stantec

- 2272 jobs
  • Senior Data Scientist

    Seneca Resources 4.6 company rating

    Hoover, AL jobs

    Sr. Data Scientist
    Clearance Requirements: None
    Contract to Hire
    Pay Rate: $70 - $80 per hour

    We are seeking a highly skilled and innovative Senior Data Scientist to design advanced analytical solutions that drive strategic decision-making and business performance. This role will serve as a key contributor in shaping data strategy, developing predictive models, and delivering actionable insights that influence product, engineering, and business initiatives. Top candidates will bring deep expertise in statistical modeling, machine learning, data mining, and large-scale analytics, paired with the ability to translate complex findings into clear, meaningful outcomes for stakeholders. This is an excellent opportunity for a motivated data scientist who thrives in dynamic, research-oriented environments and enjoys building impactful data-driven solutions.

    Core Responsibilities:
    • Lead data strategy efforts by identifying, evaluating, and integrating new datasets to enhance product capabilities.
    • Partner closely with engineering, product, and design teams to understand business needs and build scalable data solutions.
    • Develop, test, and refine statistical and machine learning models to support predictive analytics and operational improvements.
    • Source, collect, and preprocess structured and unstructured datasets for modeling and experimentation.
    • Conduct exploratory data analysis to uncover trends, patterns, and opportunities that align with business priorities.
    • Build algorithms for large-scale data mining, including model validation, optimization, and error analysis.
    • Deploy models into production environments in partnership with software developers and ML engineers.
    • Communicate insights and recommendations clearly to technical and non-technical stakeholders.
    • Stay current with emerging tools, technologies, and best practices in data science and AI/ML.

    Required Skills/Education:
    • 7+ years of experience in data science, machine learning, or advanced analytics.
    • Strong expertise in statistical analysis, data mining, pattern recognition, and predictive modeling.
    • Proficiency in SQL, Python, Java, SAS, and data visualization tools such as Tableau and Excel.
    • Experience designing, validating, and deploying analytical or ML models in production environments.
    • Ability to manage multiple projects in a fast-paced, research-focused environment.
    • Excellent communication skills with the ability to simplify complex concepts for diverse audiences.

    Preferred Qualifications:
    • Bachelor's degree in Statistics, Applied Mathematics, Computer Science, or a related field (or equivalent experience).
    • 2+ years of project management or technical leadership experience.
    • Professional certifications in data science, machine learning, cloud technologies, or related areas.
    • Experience with big data platforms (Spark, Hadoop) or cloud ML tools (AWS, Azure, GCP).

    About Seneca Resources
    At Seneca Resources, we are more than just a staffing and consulting firm; we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact. When you work with Seneca, you're choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way. Seneca Resources is proud to be an Equal Opportunity Employer, committed to fostering a diverse and inclusive workplace where all qualified individuals are encouraged to apply.
    $70-80 hourly 3d ago
  • Senior Data Scientist

    Seneca Resources 4.6 company rating

    Birmingham, AL jobs

    We are seeking a Senior Data Scientist to lead data-driven innovation and deliver actionable insights that shape strategic decisions. In this role, you will collaborate with product, design, and engineering teams to develop advanced analytical models, optimize business processes, and build scalable data solutions. The work will focus on automating the integration of disparate, unstructured data into a structured system, a process that was previously manual, time-consuming, and prone to errors. You will work with cutting-edge technologies across Python, AWS, Azure, and IBM Cloud (preferred) to design and deploy predictive models and machine learning algorithms in production environments.

    Key Responsibilities:
    • Act as a senior data strategist, identifying and integrating new datasets into product capabilities. Work will be geared toward automation use cases in which disparate data is restructured into a system to improve accuracy in data extraction, resulting in improved operational efficiency and enhanced data quality.
    • Partner with engineering teams to build and enhance data products and pipelines.
    • Execute analytical experiments and develop predictive models to solve complex business challenges.
    • Collect, clean, and prepare structured and unstructured datasets for analysis.
    • Build and optimize algorithms for large-scale data mining, pattern recognition, and predictive modeling.
    • Analyze data for trends and actionable insights to inform business decisions.
    • Deploy analytical models to production in collaboration with software developers and ML engineers.
    • Stay current with emerging technologies, cloud platforms, and industry best practices.

    Required Skills & Education:
    • 7+ years of experience in data science or advanced analytics.
    • Strong expertise in Python and proficiency in SQL.
    • Hands-on experience with AWS and Azure; familiarity with IBM Cloud is a bonus.
    • Advanced knowledge of data mining, statistical analysis, predictive modeling, and machine learning techniques.
    • Ability to work effectively in a dynamic, research-oriented environment with multiple projects.
    • Bachelor's degree in Statistics, Applied Mathematics, Computer Science, or a related field (or equivalent experience).
    • Excellent communication skills to present insights to technical and non-technical stakeholders.

    Preferred Qualifications:
    • 2+ years of project management experience.
    • Relevant professional certifications (AWS, Azure, Data Science, Machine Learning).

    About Seneca Resources:
    At Seneca Resources, we are more than just a staffing and consulting firm; we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact. When you work with Seneca, you're choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way.
    $94k-124k yearly est. 2d ago
  • Senior Data Scientist

    Pangeatwo 3.6 company rating

    Birmingham, AL jobs

    We're seeking a Contract-to-Hire Senior Data Scientist to lead and collaborate with a multidisciplinary team in designing and developing innovative analytical products and solutions using Machine Learning, NLP, and Deep Learning. This role is ideal for someone who thrives in ambiguity, enjoys solving complex problems, and can translate business needs into measurable outcomes.

    What You'll Do
    • Partner with business leaders to understand needs and define measurable goals
    • Gather requirements, build project plans, manage deadlines, and communicate updates
    • Analyze large structured and unstructured datasets
    • Build, evaluate, implement, and maintain predictive models
    • Present results to both technical and non-technical stakeholders
    • Deploy models and monitor ongoing performance and data accuracy
    • Contribute ideas, stay current with industry trends, and support team development

    Lead-Level Opportunities Include:
    • Driving data science strategy and overseeing project delivery
    • Providing technical mentorship and leadership to the team
    • Promoting innovation and exploring emerging tech, tools, and methodologies

    What We're Looking For
    • Bachelor's degree in Applied Mathematics, Statistics, Computer Science, Data Science, or a related field
    • 3-6 years of relevant experience (advanced degrees may reduce required experience)
    • Strong skills in machine learning, statistical modeling, and data analysis
    • Proficiency in Python or R
    • Experience with large datasets, preprocessing, and feature engineering
    • Prior management experience
    • Experience with transfer learning
    • Experience building and deploying deep learning solutions
    • Strong communication skills and ability to present complex concepts clearly
    • Experience in life insurance or a related domain is a plus
    • Ability to independently manage projects end-to-end

    Qualifications
    • Master's or PhD
    • Industry experience in similar roles
    • Publications or patents in data science or ML
    • Experience collaborating across technical and business teams
    • Familiarity with software engineering best practices and version control
    • Relevant certifications (AWS ML Specialty, Google Data Engineer, etc.)

    Rooted in Birmingham. Focused on You.
    We're a local recruiting firm based right here in Birmingham. We partner with top companies across the city, from large corporations to fast-growing startups, and we'd love to meet you for coffee to talk about your career goals. Whether you're actively searching or just exploring, we're here to guide you through the entire process, from resume tips to interview coaching. At our clients' request, only individuals with required experience will be considered. Please note: if you have recently submitted your resume to a PangeaTwo posting, your qualifications will be considered for other open opportunities. Your resume will never be submitted to a client without your prior knowledge and consent.
    $79k-109k yearly est. 2d ago
  • Senior Data Scientist

    Entech 4.0 company rating

    Plainfield, NJ jobs

    Data Scientist - Pharmaceutical Analytics (PhD)
    1-Year Contract - Hybrid - Plainfield, NJ

    We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.

    What You'll Do
    • Apply machine learning & advanced analytics to pharma commercial data
    • Deliver insights on market dynamics, physician prescribing, and patient behavior
    • Partner with R&D, medical affairs, and commercial teams to guide strategy
    • Build predictive models for sales effectiveness, adherence, and market forecasting

    What We're Looking For
    • PhD in Data Science, Statistics, Computer Science, Bioinformatics, or a related field
    • 5+ years of pharma or healthcare analytics experience
    • Strong skills in enterprise-class software stacks and cloud computing
    • Deep knowledge of pharma market dynamics & healthcare systems
    • Excellent communication skills to translate data into strategy
    $84k-120k yearly est. 1d ago
  • Data Scientist with ML

    Kavaliro 4.2 company rating

    Reston, VA jobs

    Kavaliro is seeking a Data Scientist to provide highly technical and in-depth data engineering support. Candidates MUST have experience with Python, PyTorch, and Flask (knowledge at minimum, with the ability to quickly pick up more), familiarity with REST APIs (at minimum), a statistics background, and a basic understanding of NLP. Desired skills include experience performing R&D with natural language processing, deploying CNNs, LLMs, or foundational models, deploying ML models on multimedia data, experience with Linux system administration (or Bash), experience with Android configuration, and experience with embedded systems (Raspberry Pi).

    Required Skills and Demonstrated Experience
    • Demonstrated experience in Python, JavaScript, and R.
    • Demonstrated experience employing machine learning and deep learning modules such as Pandas, scikit-learn, TensorFlow, and PyTorch.
    • Demonstrated experience with statistical inference, as well as building and understanding predictive models using machine learning methods.
    • Demonstrated experience with large-scale text analytics.

    Desired Skills
    • Demonstrated hands-on experience performing research or development with natural language processing and working with, deploying, and testing Convolutional Neural Networks (CNNs), large language models (LLMs), or foundational models.
    • Demonstrated experience developing and deploying testing and verification methodologies to evaluate algorithm performance and identify strategies for improvement or optimization.
    • Demonstrated experience deploying machine learning models on multimedia data, including joint text, audio, video, hardware, and peripherals.
    • Demonstrated experience with Linux system administration and associated scripting languages (Bash).
    • Demonstrated experience with Android configuration, software development, and interfacing.
    • Demonstrated experience with embedded systems (Raspberry Pi).

    Responsibilities
    • Develops and conducts independent testing and evaluation methods on research-grade algorithms in applicable fields.
    • Reports results and provides documentation and guidance on working with the research-grade algorithms.
    • Evaluates, integrates, and leverages internally hosted data science tools.
    • Customizes research-grade algorithms to be optimized for memory and computational efficiency through quantization, trimming layers, or custom methods.

    Location: Reston, Virginia. This position is onsite; there is no remote availability.
    Clearance: Active TS/SCI with Full Scope Polygraph. Applicants MUST hold permanent U.S. citizenship for this position in accordance with government contract requirements.

    Kavaliro provides Equal Employment Opportunities to all employees and applicants. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Kavaliro is committed to the full inclusion of all qualified individuals. In keeping with our commitment, Kavaliro will take steps to assure that people with disabilities are provided reasonable accommodations. Accordingly, if a reasonable accommodation is required to fully participate in the job application or interview process, to perform the essential functions of the position, and/or to receive all other benefits and privileges of employment, please respond to this posting to connect with a company representative.
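    The memory-optimization step this posting alludes to (shrinking research-grade models through quantization) can be sketched in a few lines. This is an illustrative 8-bit affine quantization of a weight vector in plain Python, not the client's actual pipeline; real work would use framework tooling such as PyTorch's quantization APIs.

```python
# Illustrative sketch: 8-bit affine quantization of model weights.
# Standalone toy example; the weight values are made up.

def quantize(weights, num_bits=8):
    """Map float weights to ints in [0, 2**num_bits - 1] via scale/zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard constant vectors
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale + zero_point))) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.4, 0.0, 0.3, 0.9, 2.1]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Rounding bounds the per-weight error by about half a quantization step.
assert max_err <= scale / 2 + 1e-9
```

    Each float is stored in one byte instead of four, at the cost of a small, bounded reconstruction error; trimming layers trades accuracy for compute in an analogous way.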
    $74k-105k yearly est. 3d ago
  • Data Scientist

    Ledgent Technology 3.5 company rating

    Wisconsin jobs

    We are seeking a Data Scientist to deliver predictive analytics and actionable insights that enhance financial forecasting and supply chain performance. This role will partner with business leaders and analysts to design models that inform strategic decisions. You will work primarily within Microsoft Fabric, leveraging Delta Lake/OneLake and the Medallion Architecture (Bronze-Silver-Gold) to build scalable solutions and lay the groundwork for future AI-driven capabilities.

    This is a full-time, direct-hire role which will be onsite in Mendota Heights, MN. Local candidates only. Target salary is between $120,000-140,000. Candidates must be eligible to work in the United States without sponsorship both now and in the future. No C2C or third parties.

    Key Responsibilities
    • Develop and deploy machine learning models for cost modeling, sales forecasting, and long-term work order projections.
    • Analyze large, complex datasets to uncover trends, anomalies, and opportunities for operational improvement.
    • Collaborate with finance, supply chain, and business teams to translate challenges into data-driven solutions.
    • Work with engineering teams to create robust pipelines for data ingestion, transformation, and modeling using cloud-native tools.
    • Utilize Azure services (Data Lake, Synapse, ML Studio) to operationalize models and manage workflows.
    • Present insights through clear visualizations and executive-level presentations.
    • Contribute to governance standards, audit trails, and model documentation.

    Qualifications
    Education & Certifications
    • Bachelor's degree required; Master's in Computer Science, IT, or a related field preferred.
    • Cloud certifications (Azure or similar) are a plus.
    Experience & Skills
    • 5+ years as a Data Scientist or in a similar role.
    • Hands-on experience with Microsoft Fabric, Azure Synapse, and related cloud technologies.
    • Proficiency in Python, R, SQL, and visualization tools (Power BI, Tableau).
    • Strong background in financial modeling, cost allocation, and supply chain analytics.
    • Familiarity with Oracle and Salesforce UI navigation is helpful.
    • Excellent business acumen and ability to communicate complex concepts to senior leadership.
    • Strong problem-solving skills and ability to design scalable solutions.
    Preferred
    • Experience with Azure Machine Learning.
    • Knowledge of Jitterbit is a plus.

    All qualified applicants will receive consideration for employment without regard to race, color, national origin, age, ancestry, religion, sex, sexual orientation, gender identity, gender expression, marital status, disability, medical condition, genetic information, pregnancy, or military or veteran status. We consider all qualified applicants, including those with criminal histories, in a manner consistent with state and local laws, including the California Fair Chance Act, City of Los Angeles' Fair Chance Initiative for Hiring Ordinance, and Los Angeles County Fair Chance Ordinance. For unincorporated Los Angeles County, to the extent our customers require a background check for certain positions, the Company faces a significant risk to its business operations and business reputation unless a review of criminal history is conducted for those specific job positions.
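    The Medallion (Bronze-Silver-Gold) layering named in this posting can be sketched in miniature. The sketch below uses plain Python dicts in place of Fabric/Delta tables so it stays self-contained; the table names, fields, and region codes are hypothetical, and a real pipeline would express each layer as Spark/SQL transformations over Delta tables.

```python
# Illustrative Medallion flow: raw Bronze rows -> validated Silver rows
# -> business-level Gold aggregate. All data here is made up.

bronze = [  # raw ingested records, kept as-is (including bad rows)
    {"order_id": "1", "amount": "120.50", "region": "MW"},
    {"order_id": "2", "amount": "n/a",    "region": "MW"},  # bad value
    {"order_id": "3", "amount": "80.00",  "region": "NE"},
]

def to_silver(rows):
    """Silver layer: typed, validated records; invalid rows are dropped."""
    out = []
    for r in rows:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
        except ValueError:
            continue  # in practice, route to a quarantine table
    return out

def to_gold(rows):
    """Gold layer: business aggregate (revenue per region)."""
    agg = {}
    for r in rows:
        agg[r["region"]] = agg.get(r["region"], 0.0) + r["amount"]
    return agg

silver = to_silver(bronze)
gold = to_gold(silver)
assert gold == {"MW": 120.5, "NE": 80.0}
```

    The design point is that raw data is never mutated in place: Bronze preserves the source, Silver enforces types and quality, and Gold serves curated metrics to forecasting models and dashboards.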
    $120k-140k yearly 1d ago
  • Machine Learning Data Scientist

    Digital Prospectors 4.1 company rating

    Pittsburgh, PA jobs

    Machine Learning Data Scientist
    Length: 6-Month Contract to Start
    *Please, no agencies. Direct employees currently authorized to work in the United States only - no sponsorship available.*

    Job Description:
    We are looking for a Data Scientist/Engineer with machine learning expertise and strong skills in Python, time-series modeling, and SCADA/industrial data. In this role, you will build and deploy ML models for forecasting, anomaly detection, and predictive maintenance using high-frequency sensor and operational data.

    Essential Duties and Responsibilities:
    • Develop ML models for time-series forecasting and anomaly detection
    • Build data pipelines for SCADA/IIoT data ingestion and processing
    • Perform feature engineering and signal analysis on time-series data
    • Deploy models in production using APIs, microservices, and MLOps best practices
    • Collaborate with data engineers and domain experts to improve data quality and model performance

    Qualifications:
    • Strong Python skills
    • Experience working with SCADA systems or industrial data historians
    • Solid understanding of time-series analytics and signal processing
    • Experience with cloud platforms and containerization (AWS/Azure/GCP, Docker)

    POST-OFFER BACKGROUND CHECK IS REQUIRED.

    Digital Prospectors is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Digital Prospectors affirms the right of all individuals to equal opportunity and prohibits any form of discrimination or harassment.

    Come see why DPC has achieved:
    • A 4.9/5 star Glassdoor rating, and the only staffing company (< 1,000 employees) voted into the national Top 10 ‘Employee's Choice - Best Places to Work' by Glassdoor.
    • Voted ‘Best Staffing Firm to Temp/Contract For' seven times by Staffing Industry Analysts, as well as a ‘Best Company to Work For' by Forbes, Fortune, and Inc. magazine.

    As you are applying, please join us in fostering diversity, equity, and inclusion by completing the Invitation to Self-Identify form today! *******************
    Job #18135
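    As a minimal sketch of the kind of time-series anomaly detection this role describes, one common baseline is a rolling z-score over high-frequency sensor readings: flag a point when it deviates too far from the recent window's mean. The window size, threshold, and signal values below are illustrative, not from the posting.

```python
# Rolling z-score anomaly detector for a sensor series (toy data).
import statistics

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices where a point is > threshold sigmas from the
    mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady sensor signal with one spike injected at index 8.
signal = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 9.9, 10.0, 25.0, 10.1]
assert rolling_zscore_anomalies(signal) == [8]
```

    Production SCADA pipelines would compute the same statistic incrementally over streaming windows and feed flagged points into predictive-maintenance workflows, but the detection logic is this simple at its core.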
    $67k-91k yearly est. 1d ago
  • Data Scientist

    US Tech Solutions 4.4 company rating

    Washington, DC jobs

    Duration: 6 months, with possible extension

    AI Engineer

    1. Background and Context
    The AI Engineer will play a pivotal role in designing, developing, and deploying artificial intelligence solutions that enhance operational efficiency, automate decision-making, and support strategic initiatives for the environmental and social specialists in the Bank. This role is central to the VPU's digital transformation efforts and will contribute to the development of scalable, ethical, and innovative AI systems.

    2. Qualifications and Experience
    Education
    • Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
    Experience
    • Minimum 3 years of experience in AI/ML model development and deployment.
    • Experience with MLOps tools (e.g., MLflow), Docker, and cloud platforms (AWS, Azure, GCP).
    • Proven track record in implementing LLMs, RAG, NLP model development, and GenAI solutions.
    Technical Skills
    • Skilled in Azure AI/Google Vertex Search, vector databases, fine-tuning RAG, NLP model development, and API management (facilitating access to different sources of data).
    • Proficiency in Python, TensorFlow, PyTorch, and NLP frameworks.
    • Expertise in deep learning, computer vision, and large language models.
    • Familiarity with REST APIs, NoSQL, and RDBMS.
    Soft Skills
    • Strong analytical and problem-solving abilities.
    • Excellent communication and teamwork skills.
    • Strategic thinking and an innovation mindset.

    3. Certifications (Preferred)
    • Microsoft Certified: Azure AI Engineer Associate
    • Google Machine Learning Engineer
    • SAFe Agile Software Engineer (ASE)
    • Certification in AI Ethics

    4. Objectives of the Assignment
    • Develop and implement AI models and algorithms tailored to business needs.
    • Integrate AI solutions into existing systems and workflows.
    • Ensure ethical compliance and data privacy in all AI initiatives.
    • Support user adoption through training and documentation.
    • Support existing AI solutions through refinement, troubleshooting, and reconfiguration.

    5. Scope of Work and Responsibilities
    AI Solution Development
    • Collaborate with cross-functional teams to identify AI opportunities.
    • Train, validate, and optimize machine learning models.
    • Translate business requirements into technical specifications.
    AI Solution Implementation
    • Develop code, deploy AI models into production environments, and conduct ongoing model training.
    • Monitor performance, troubleshoot issues, and fine-tune solutions to improve accuracy.
    • Ensure compliance with ethical standards and data governance policies.
    User Training and Adoption
    • Conduct training sessions for stakeholders on AI tools.
    • Develop user guides and technical documentation.
    Data Analysis and Research
    • Collect, preprocess, and engineer large datasets for machine learning and AI applications.
    • Recommend and implement data cleaning and preparation.
    • Analyze and use structured and unstructured data (including geospatial data) to extract features and actionable insights.
    • Monitor data quality, detect bias, and manage model/data drift in production environments.
    • Research emerging AI technologies and recommend improvements.
    Governance, Strategy, Support, and Maintenance
    • Advise client staff on AI strategy and policy implications.
    • Contribute to the team's AI roadmap and innovation agenda.
    • Provide continuous support and contribute toward maintenance and future enhancements.

    6. Deliverables
    • Work on proofs of concept to study the technical feasibility of AI use cases.
    • Functional AI applications integrated into business systems.
    • Documentation of model/application architecture, training data, and performance metrics.
    • Training materials and user guides.
    • Develop, train, and deploy AI models tailored to business needs.

    About US Tech Solutions:
    US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************

    US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

    Recruiter Details:
    Name: Pooja Rani
    Email: ******************************
    Internal Id: 25-53638
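    The retrieval half of a RAG system like the one this role supports can be sketched in a few lines: score stored documents against a query and hand the best match to the generator. Real systems use embedding models and the vector databases the posting mentions; the word-count vectors, documents, and query below are hypothetical stand-ins chosen to keep the example self-contained.

```python
# Toy RAG retrieval step: cosine similarity over bag-of-words vectors.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in docs]
    return max(scored)[0:2][1]  # best-scoring document

docs = [
    "safeguard policies for environmental assessments",
    "travel reimbursement procedures for staff",
]
best = retrieve("environmental safeguard policies", docs)
assert best == docs[0]
```

    In production, the Counter vectors become dense embeddings from a model, the linear scan becomes an approximate nearest-neighbor search in a vector database, and the retrieved text is prepended to the LLM prompt.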
    $75k-110k yearly est. 4d ago
  • Data Scientist

    The Intersect Group 4.2 company rating

    Phoenix, AZ jobs

    We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.

    Day-to-Day Responsibilities
    • Compare and validate numbers across multiple data systems
    • Investigate discrepancies and understand how metrics are derived
    • Perform data science and data analysis tasks
    • Build and maintain AI/ML models using Python
    • Interpret model results, fine-tune algorithms, and iterate based on findings
    • Validate and reconcile data from different sources to ensure accuracy
    • Work with sales and production data to produce item-level pricing recommendations
    • Support ongoing development of a new data warehouse and create queries as needed
    • Review Power BI dashboards (Power BI expertise not required)
    • Contribute to both ML-focused work and general data science responsibilities
    • Improve and refine an existing ML pricing model already in production

    Qualifications
    • Strong proficiency with MS SQL Server
    • Experience creating and deploying machine learning models in Python
    • Ability to interpret, evaluate, and fine-tune model outputs
    • Experience validating and reconciling data across systems
    • Strong foundation in machine learning, data modeling, and backend data operations
    • Familiarity with querying and working with evolving data environments
    $76k-109k yearly est. 3d ago
  • Senior Data Engineer

    Bayforce 4.4 company rating

    Charlotte, NC jobs

    **NO 3rd-party vendor candidates or sponsorship**
    Role Title: Senior Data Engineer
    Client: Global construction and development company
    Employment Type: Contract
    Duration: 1 year
    Preferred Location: Remote, based in ET or CT time zones

    Role Description:
    The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.

    Key Responsibilities
    • Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
    • Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
    • Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
    • Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
    • Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
    • Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
    • Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
    • Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.

    Requirements:
    • Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
    • 5+ years of experience in data engineering with a strong focus on Azure Cloud.
    • Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
    • Strong hands-on experience with Synapse (pipelines, SQL pools, and/or Spark) and modern cloud data platform patterns.
    • Advanced PySpark/Spark experience for complex transformations and performance optimization.
    • Heavy experience with API-based integrations (building ingestion frameworks; handling auth, pagination, retries, rate limits, and resiliency).
    • Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
    • Strong understanding of cloud data architectures, including Data Lake, Lakehouse, and Data Warehouse patterns.

    Preferred Skills
    • Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
    • Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
    • Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
    • Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
    • Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
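    The resilient, paginated API ingestion this role calls for follows a well-known pattern: bounded retries with exponential backoff around each page fetch, paging until the source reports no more data. The sketch below stubs the API with an in-memory function so it is self-contained; a real implementation would call `requests` or `httpx` against the source system, and the page layout, error type, and retry limits are illustrative assumptions.

```python
# Sketch of paginated ingestion with retry/backoff (stubbed API).
import time

class TransientError(Exception):
    """Stand-in for a retryable failure (throttling, timeout, 5xx)."""

def fetch_with_retry(fetch, page, max_retries=3, base_delay=0.01):
    """Call fetch(page), retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fetch(page)
        except TransientError:
            if attempt == max_retries - 1:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * 2 ** attempt)

def ingest_all(fetch):
    """Page through the source until an empty batch signals the end."""
    records, page = [], 0
    while True:
        batch = fetch_with_retry(fetch, page)
        if not batch:
            return records
        records.extend(batch)
        page += 1

# Stub API: three pages of two records each; page 1 fails on first attempt.
calls = {"n": 0}
def stub_fetch(page):
    calls["n"] += 1
    if page == 1 and calls["n"] == 2:   # simulate a one-off throttle
        raise TransientError("throttled")
    data = [[1, 2], [3, 4], [5, 6]]
    return data[page] if page < len(data) else []

assert ingest_all(stub_fetch) == [1, 2, 3, 4, 5, 6]
```

    Separating the retry policy from the paging loop keeps both testable in isolation, which is the same reason production frameworks make backoff, rate limits, and pagination independently configurable.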
    $70k-93k yearly est. 3d ago
  • Senior Data Engineer

    Revel It 4.3 company rating

    Columbus, OH jobs

    Our direct client has a long-term contract need for a Sr. Data Engineer.

    Candidate Requirements:
    • Candidates must be local to Columbus, Ohio.
    • Candidates must be willing and able to work a hybrid schedule (3 days in office & 2 days WFH).

    The team is responsible for the implementation of the new Contract Management System (FIS Asset Finance) as well as its integration into the overall environment and the migration of data from the legacy contract management system to the new system. The candidate will be focused on the delivery of data migration topics to ensure that high-quality data is migrated from the legacy systems to the new systems. This may involve data mapping, SQL development, and other technical activities to support data migration objectives.

    Must-Have Experience:
    • Strong C# and SQL Server design and development skills (IMPORTANT: MUST HAVE!)
    • Analysis and design
    • Strong technical analysis skills
    • Strong collaboration skills to work effectively with cross-functional teams
    • Exceptional ability to structure, illustrate, and communicate complex concepts clearly and effectively to diverse audiences, ensuring understanding and actionable insights
    • Demonstrated adaptability and problem-solving skills to navigate challenges and uncertainties in a fast-paced environment
    • Strong prioritization and time management skills to balance multiple projects and deadlines in a dynamic environment
    • In-depth knowledge of Agile methodologies and practices, with the ability to adapt and implement Agile principles in testing and delivery processes

    Nice to Have:
    • ETL design and development
    • Data mapping skills and experience
    • Experience executing/driving technical design and implementation topics
    $88k-120k yearly est. 5d ago
  • Senior Data Analytics Engineer

    Revel It 4.3company rating

    Columbus, OH jobs

    We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products, spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization. You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.

    Key Responsibilities:

    Data Engineering & Pipeline Development:
    - Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS-native services.
    - Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
    - Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
    - Implement best practices for data quality, validation, auditing, and error handling within pipelines.

    Analytics Solution Design:
    - Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
    - Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
    - Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.

    Cross-Functional Collaboration:
    - Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
    - Participate in daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
    - Provide mentorship and guidance to junior engineers and analysts as needed.

    Engineering (Supporting Skills):
    - Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application-layer needs.
    - Implement scripts, utilities, and microservices as needed to support analytics workloads.

    Required Qualifications:
    - 5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
    - Expert-level proficiency (10/10) in Python and PySpark.
    - Strong working knowledge of SQL and other programming languages.
    - Demonstrated experience designing and delivering big-data ingestion and transformation solutions on AWS.
    - Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
    - Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
    - Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
    - Strong problem-solving skills and the ability to work independently in a fast-paced environment.

    Preferred Qualifications:
    - Experience with BI/visualization tools such as Tableau.
    - Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
    - Familiarity with machine learning workflows or MLOps frameworks.
    - Knowledge of metadata management, data governance, and data lineage tools.
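The data-quality/error-handling responsibility above often takes the form of a quarantine pattern: invalid records are routed aside with a reason rather than dropped silently, so they can be audited and replayed. A minimal sketch, with purely illustrative field names:

```python
def validate_rows(rows, required=("id", "amount")):
    """Split incoming records into clean rows and quarantined rows.

    Bad records are kept with a reason string instead of being discarded,
    supporting auditing and reprocessing. The schema here ("id", "amount")
    is a hypothetical example, not from any posting.
    """
    clean, quarantine = [], []
    for row in rows:
        missing = [k for k in required if row.get(k) is None]
        if missing:
            quarantine.append({"row": row, "reason": f"missing {missing}"})
        elif not isinstance(row["amount"], (int, float)):
            quarantine.append({"row": row, "reason": "amount not numeric"})
        else:
            clean.append(row)
    return clean, quarantine

good, bad = validate_rows([
    {"id": 1, "amount": 9.5},
    {"id": 2, "amount": "n/a"},   # wrong type -> quarantined
    {"id": None, "amount": 3.0},  # missing key -> quarantined
])
```

In a PySpark pipeline the same split is typically done with a filter on a validity flag, writing the quarantine side to its own table.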
    $88k-120k yearly est. 5d ago
  • Senior Data Engineer

    Pinnacle Partners, Inc. 4.4company rating

    Indianapolis, IN jobs

    Pinnacle Partners is assisting our client in the search for a Senior Data Engineer to join their team in the Indianapolis, IN area. This resource will be responsible for supporting the large-scale data modernization initiative and for operationalizing the platform moving forward.

    RESPONSIBILITIES:
    - Design, develop, and refine BI-focused data architecture and data platforms
    - Work with internal teams to gather requirements and translate business needs into technical solutions
    - Build and maintain data pipelines supporting transformation
    - Develop technical designs, data models, and roadmaps
    - Troubleshoot and resolve data quality and processing issues
    - Create and maintain detailed documentation for data warehouses, data stores, and end-to-end data flows
    - Mentor and support junior team members

    REQUIREMENTS:
    - 5+ years of hands-on experience with data warehousing, databases, and dimensional data modeling
    - 5+ years of experience across end-to-end data analysis and development
    - Experience using Git version control
    - Advanced SQL skills
    - Strong experience with AWS cloud

    PREFERRED SKILLS:
    - Experience with Snowflake
    - Experience with Python or R
    - Bachelor's degree in an IT-related field

    TERMS: This is a direct-hire opportunity with a salary up to $130K based on experience. Benefits include medical, dental, and vision coverage along with generous PTO, 401K matching, wellness programs, and more.
    $130k yearly 1d ago
  • ML Engineer with Timeseries data experience

    Techstar Group 3.7company rating

    Atlanta, GA jobs

    Role: ML Engineer with Time-Series Data Experience
    Hybrid in Atlanta, GA (locals preferred)
    $58/hr on C2C; any visa

    - Model Development: Design, build, train, and optimize ML/DL models for time-series forecasting, prediction, anomaly detection, and causal inference.
    - Data Pipelines: Create robust data pipelines for collection, preprocessing, feature engineering, and labeling of large-scale time-series data.
    - Scalable Systems: Architect and implement scalable AI/ML infrastructure and MLOps pipelines (CI/CD, monitoring) for production deployment.
    - Collaboration: Work with data engineers, software developers, and domain experts to integrate AI solutions.
    - Performance: Monitor, troubleshoot, and optimize model performance, ensuring robustness and real-world applicability.
    - Languages & Frameworks: Good understanding of the AWS framework, Python (Pandas, NumPy), PyTorch, TensorFlow, Scikit-learn, PySpark.
    - ML/DL Expertise: Strong grasp of time-series models (ARIMA, Prophet, deep learning), anomaly detection, and predictive analytics.
    - Data Handling: Experience with large datasets, feature engineering, and scalable data processing.
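The anomaly-detection responsibility above can be illustrated with a minimal rolling z-score detector, a common baseline before reaching for ARIMA/Prophet residuals or learned models. The window size, threshold, and signal values are arbitrary choices for this sketch, not values from the posting.

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices where a point deviates from the trailing window mean
    by more than `threshold` standard deviations.

    Deliberately simple: production systems would typically score
    residuals from a fitted forecasting model instead.
    """
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]           # trailing window only
        mu = statistics.fmean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Flat signal with one obvious spike at index 7.
signal = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 25.0, 10.0, 10.2]
spikes = zscore_anomalies(signal)
```

Note the trailing (not centered) window: the detector only uses past points, so it can run online over a stream.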
    $58 hourly 4d ago
  • LEAD SNOWFLAKE DATA ENGINEER

    KPG99 Inc. 4.0company rating

    Minneapolis, MN jobs

    Job Title: Lead Snowflake Data Engineer
    Employment Type: 6-month contract-to-hire
    Work Arrangement: On-site (4 days/week)
    Eligibility: U.S. citizens or Green Card holders only
    Experience Level: 7+ years

    Role Overview: We are seeking a Lead Snowflake Data Engineer to design, build, and optimize modern cloud-based data platforms. This role requires deep hands-on expertise with Snowflake, strong SQL skills, cloud data engineering experience, and the ability to lead and mentor a team of data engineers.

    Required Qualifications:
    - 7+ years of experience in data engineering or related roles
    - 5-10 years of hands-on experience with Snowflake
    - Strong proficiency in SQL, including complex query development and stored procedures
    - Experience with automation and scripting (e.g., Python, Shell, or similar)
    - Hands-on experience with data ingestion and transformation frameworks
    - Strong understanding of Snowflake architecture, including storage, compute, security, and infrastructure
    - Proven experience with Snowflake troubleshooting and performance tuning
    - Experience with cloud platforms such as AWS and/or Azure
    - Solid understanding of cloud data lakehouse architectural patterns
    - Experience leading, mentoring, and providing technical direction to data engineering teams
    - Proven ability to work closely with business partners to develop and manage data domains

    Preferred / Additional Skills (experience in one or more of the following areas is highly desirable):
    - Programming languages (e.g., Python, Java, Scala)
    - Relational and non-relational databases
    - ETL/ELT tools and frameworks
    - Data storage solutions (on-premises and cloud-based)
    - Big data technologies
    - Machine learning or advanced analytics
    - Data modeling and data visualization tools
    - Cloud computing and data security best practices
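The ingestion/transformation work described above commonly centers on incremental upserts, which Snowflake expresses with a standard MERGE statement. A small sketch that composes one programmatically; the table and column names (`dim_customer`, `stg_customer`, etc.) are hypothetical examples, not from the posting.

```python
def build_merge_sql(target, staging, key, cols):
    """Compose a MERGE statement for an incremental upsert from a
    staging table into a target table -- a typical Snowflake load step.
    Assumes trusted identifiers (no quoting/escaping in this sketch).
    """
    on = f"t.{key} = s.{key}"
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in cols)
    col_list = ", ".join([key, *cols])
    val_list = ", ".join(f"s.{c}" for c in [key, *cols])
    return (
        f"MERGE INTO {target} t USING {staging} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({val_list})"
    )

sql = build_merge_sql("dim_customer", "stg_customer", "customer_id",
                      ["name", "segment"])
```

In practice the staging table would hold only new or changed rows (for example, filtered by a watermark column), so each run touches a small delta rather than the full table.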
    $82k-112k yearly est. 4d ago
  • Data Engineer

    Sharp Decisions 4.6company rating

    Charlotte, NC jobs

    C# Senior Developer
    RESOURCE TYPE: W2 only
    Charlotte, NC - Hybrid
    Mid (5-7 years)

    Role Description: A leading Japanese bank is driving a Digital Transformation across its Americas Division as it continues to modernize technology, strengthen its data-driven approach, and support future growth. As part of this initiative, the firm is seeking an experienced Data Engineer to support the design and development of a strategic enterprise data platform supporting Capital Markets and affiliated securities businesses. This role will contribute to the development of a scalable, cloud-based data platform leveraging Azure technologies, supporting multiple business units across North America and global teams.

    Role Objectives:
    - Serve as a member of the Data Strategy team, supporting broker-dealer and swap-dealer entities across the Americas Division.
    - Participate in the active development of the enterprise data platform, beginning with the establishment of reference data systems for securities and pricing data, and expanding into additional data domains.
    - Collaborate closely with internal technology teams while adhering to established development standards and best practices.
    - Support the implementation and expansion of the strategic data platform on the bank's Azure Cloud environment.
    - Contribute technical expertise and solution design aligned with the overall Data Strategy roadmap.

    Qualifications and Skills:
    - Proven experience as a Data Engineer, with strong hands-on experience in Azure cloud environments.
    - Experience implementing solutions using Azure Cloud Services, Azure Data Factory, Azure Data Lake Gen2, Azure Databases, Azure Data Fabric, API Gateway management, and Azure Functions.
    - Strong experience with Azure Databricks.
    - Advanced SQL skills across relational and NoSQL databases.
    - Experience developing APIs using Python (FastAPI or similar frameworks).
    - Familiarity with DevOps and CI/CD pipelines (Git, Jenkins, etc.).
    - Strong understanding of ETL/ELT processes.
    - Experience within financial services, including exposure to financial instruments, asset classes, and market data, is a strong plus.
    $78k-101k yearly est. 6d ago
  • Data Engineer

    Sharp Decisions 4.6company rating

    Charlotte, NC jobs

    Experience Level: Mid (5-7 years)
    W2 ONLY - NO 3RD PARTIES PLEASE
    CONTRACT / C2H

    Role Objectives:
    - These roles are part of the Data Strategy team spanning the client's Capital Markets teams.
    - These roles will be involved in the active development of the data platform in close coordination with the client team, beginning with the establishment of a reference data system for securities and pricing data, and later moving to other data domains.
    - The consulting team will need to follow internal development standards to contribute to the overall agenda of the Data Strategy team.

    Qualifications and Skills:
    - Proven experience as a Data Engineer with experience in Azure cloud.
    - Experience implementing solutions using Azure cloud services, Azure Data Factory, Azure Data Lake Gen2, Azure Databases, Azure Data Fabric, API Gateway management, and Azure Functions.
    - Well versed in Azure Databricks.
    - Strong SQL skills with RDBMS or NoSQL databases.
    - Experience developing APIs using FastAPI or similar frameworks in Python.
    - Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes.
    - Good understanding of ETL/ELT processes.
    - Experience in the financial services industry, financial instruments, asset classes, and market data is a plus.
    $78k-101k yearly est. 4d ago
  • Data Engineer

    Interactive Resources-IR 4.2company rating

    Tempe, AZ jobs

    About the Role: We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For:
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities:
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills:
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills:
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills:
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $92k-122k yearly est. 3d ago
  • Data Engineer

    Interactive Resources-IR 4.2company rating

    Austin, TX jobs

    About the Role: We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For:
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities:
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills:
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills:
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills:
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 3d ago
  • Azure Data Engineer

    Sharp Decisions 4.6company rating

    Jersey City, NJ jobs

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 1d ago

Learn more about Stantec jobs

View all jobs