Data Engineer jobs at Vorys, Sater, Seymour and Pease - 1147 jobs
Senior Software Engineer
Vorys, Sater, Seymour and Pease LLP 4.9
Precision eControl (PeC) is a wholly owned ancillary business of Vorys that provides integrated solutions to help brands control the sales of their products in the age of eCommerce. We have represented more than 300 brands, including many of the world's largest companies. PeC's full scope of services allows us to provide a truly comprehensive approach that delivers unique business value.
Position Summary:
The Senior Software Engineer (Front-End) will design, develop, and implement software solutions utilizing Laravel, TailwindCSS, HTML, SQL, and JavaScript. This position is responsible for developing backend and frontend components, database schemas and models, writing/maintaining tests, creating/maintaining deployment pipelines and environments, and responding to support issues and production bugs/outages. At this time, candidates who would work in the following states will not be considered for this role: AZ, CA, CO, CT, DE, DC, HI, IL, MA, ME, MI, MD, MN, NV, NJ, NY, RI, VT, and WA.
Essential Functions:
Develop and maintain front-end applications using Vue, Tailwind CSS, JavaScript, Filament, and related technologies.
Develop and maintain Laravel applications using PHP, Laravel, SQL, and related technologies.
Write and maintain unit tests and automated click tests.
Maintain and develop components for a shared design component library.
Participate in sprint ceremonies, collaborate with product and design.
Debug and troubleshoot issues, including production support, across the backend, frontend, and database components of the application.
Perform code reviews, provide feedback to other engineers, and ensure the quality of the codebase.
Maintain CI/CD pipelines, infrastructure, and databases.
Knowledge, Skills and Abilities Required:
5+ years of experience with Vue (or similar frameworks such as React or Svelte)
3+ years of experience integrating back-end business applications with front-end, preferably PHP/Laravel
Experience developing and maintaining frontend component libraries and working with Product/Design on UX
Experience performing code reviews and providing feedback/mentorship to fellow engineers
Experience debugging frontend and backend issues
Ability to collaborate closely with cross-functional teams, including designers and product managers
Ability to turn designs into responsive frontend code
Demonstrated knowledge of accessibility best practices
Desirable But Not Essential:
Experience building/maintaining design systems
Experience with TailwindCSS
Education and Experience:
Bachelor's degree in related discipline or combination of equivalent education and experience.
Bachelor's degree in computer science preferred.
5 - 7 years of experience in similar field.
The expected pay scale for this position is $135,000.00- $160,000.00 and represents our good faith estimate of the starting rate of pay at the time of posting. The actual compensation offered will depend on factors such as your qualifications, relevant experience, education, work location, and market conditions.
At PeC, we are dedicated to fostering a workplace where employees can succeed both personally and professionally. We offer competitive compensation along with a robust benefits package designed to support your health, well-being, and long-term goals. Our benefits include medical, dental, vision, FSA, life and disability coverage, paid maternity & parental leave, discretionary bonus opportunity, family building resources, identity theft protection, a 401(k) plan with discretionary employer contribution potential, and paid sick, personal and vacation time. Some benefits are provided automatically, while others may be available for voluntary enrollment. You'll also have access to opportunities for professional growth, work-life balance, and programs that recognize and celebrate your contributions.
Equal Opportunity Employer:
PeC does not discriminate in hiring or terms and conditions of employment because of an individual's sex (including pregnancy, childbirth, and related medical conditions), race, age, religion, national origin, ancestry, color, sexual orientation, gender identity, gender expression, genetic information, marital status, military/veteran status, disability, or any other characteristic protected by local, state or federal law. PeC only hires individuals authorized for employment in the United States.
PeC is committed to providing reasonable accommodations to qualified individuals in our employment application process unless doing so would constitute an undue hardship. If you need assistance or an accommodation in our employment application process due to a disability; due to a limitation related to, affected by, or arising out of pregnancy, childbirth, or related medical conditions; or due to a sincerely held religious belief, practice, or observance, please contact Julie McDonald, CHRO. Our policy regarding requests for reasonable accommodation applies to all aspects of the hiring process.
#LI-Remote
$135k-160k yearly 60d+ ago
Junior Data Engineer
Brooksource 4.1
Columbus, OH
Contract-to-Hire
Columbus, OH (Hybrid)
Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.
Job Responsibilities
Automate key processes and enhance data quality
Improve injection processes and enhance machine learning capabilities
Manage substitutions and allocations to streamline product ordering
Work on logistics-related data engineering tasks
Build and maintain ML models for predictive analytics
Interface with various customer systems
Collaborate on integrating AI models into customer service
Qualifications
Bachelor's degree in related field
0-2 years of relevant experience
Proficiency in SQL and Python
Understanding of GCP/BigQuery (or any cloud experience; basic certifications a plus)
Knowledge of data science concepts
Business acumen and understanding (corporate experience or internship preferred)
Familiarity with Tableau
Strong analytical skills
Aptitude for collaboration and knowledge sharing
Ability to present confidently in front of leaders
Why Should You Apply?
You will be part of custom technical training and professional development through our Elevate Program!
Start your career with a Fortune 15 company!
Access to cutting-edge technologies
Opportunity for career growth
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
$86k-117k yearly est. 5d ago
Azure Data Engineer
Kellymitchell Group 4.5
Irving, TX
Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.
Duties:
Lead the design, architecture, and implementation of key data initiatives and platform capabilities
Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying issues and guiding teams to implement solutions
Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders
Desired Skills/Experience:
Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
5+ years of relevant work experience in data engineering
Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
$140k-145k yearly 2d ago
Data Scientist
Kellymitchell Group 4.5
Irving, TX
Our client is seeking a Data Scientist to join their team! This position is located in Irving, Texas.
Duties:
Build, evaluate, and deploy models to identify and target customer segments for personalized experiences and marketing
Design, run, and analyze A/B tests and other online experiments to measure the impact of new features, campaigns, and product changes
Research, prototype, and develop AI solutions for personalized systems and customer segmentation
Design and analyze online controlled experiments (A/B tests) to validate hypotheses and measure business impact
Build, deploy, and analyze AI solutions; perform statistical experiments when deploying new AI products
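The A/B testing duties above come down to standard hypothesis testing on conversion rates. As a rough illustration (the sample sizes and threshold here are hypothetical, not from the posting), a two-proportion significance check might look like:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control converts 200/1000, variant 240/1000.
z = two_proportion_ztest(200, 1000, 240, 1000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
```

In practice a data scientist would reach for a library such as scipy or statsmodels, but the arithmetic being run is essentially this.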
Desired Skills/Experience:
Bachelor's Degree in Computer Science/Engineering/Math, or relevant experience
2+ years of experience with statistical data science techniques, feature engineering, and customer segmentation
2+ years of experience with SQL, PySpark, and Python
2+ years of experience training, evaluating, and deploying machine learning models
2+ years of experience productionizing and deploying ML workloads in AWS/Azure
Experience working with MarTech platforms such as: CDPs, DMPs, ESPs and integrating data science into marketing workflows
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay for this position starts at $115,000-$128,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
$115k-128k yearly 2d ago
Data Architect
Optech 4.6
Cincinnati, OH
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
REMOTE MOSTLY WITH 1 DAY/MONTH ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing).
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms outside the scope of this project.
$75-85 hourly 4d ago
Senior Data Architect
Robert Half 4.5
Houston, TX
Our company is seeking a Senior Data Architect for a fully onsite, 3-month contract opportunity in Houston. This position centers on modernizing core data operations and migrating five analytics models to an Azure Fabric environment.
Key Responsibilities:
Lead the migration of existing models and data pipelines into Azure Fabric infrastructure.
Apply strong data architecture principles to ensure robust, scalable solutions.
Design, build, and optimize ETL pipelines using T-SQL and PySpark to support ongoing modernization.
Modernize, productionize, and operationalize five existing machine learning/data models.
Implement DevOps best practices for deployment, testing, and continuous integration.
Conduct framework and code strategy reviews, focusing on code optimization and maintainability.
Collaborate cross-functionally with data science, engineering, and IT teams to deliver high-impact solutions.
Establish comprehensive testing protocols for reliability and performance.
Requirements:
Proven experience in advanced data architecture and model migration projects.
Expertise working with Azure Fabric infrastructure environments.
Hands-on proficiency in T-SQL and PySpark.
Demonstrated success modernizing and productionizing data models in enterprise environments.
Strong background in DevOps, CI/CD pipelines, and automated testing.
Experience developing and optimizing ETL processes for large-scale data systems.
Ability to deliver results under tight timeframes and changing priorities.
Excellent communication and stakeholder management skills.
$99k-137k yearly est. 4d ago
Senior BI Data Modeler
The Intersect Group 4.2
Dallas, TX
We are seeking a highly skilled Data Modeler / BI Developer to join our team. This role will focus on designing and implementing enterprise-level data models, ensuring data security, and enabling advanced analytics capabilities within our Primoris BI platforms. The ideal candidate will have strong technical expertise, excellent problem-solving skills, and the ability to collaborate effectively with cross-functional teams.
Key Responsibilities
Collaborate with the Data Ingestion team to design and develop the “Gold” layer within a Medallion Architecture.
Design and implement data security and masking standards, processes, and solutions across various data stores and reporting layers.
Build and execute enterprise-level data models using multiple data sources for business analytics and reporting in Power BI.
Partner with business leaders to identify and prioritize data analysis and platform enhancement needs.
Work with analytics teams and business leaders to determine requirements for composite data models.
Communicate data model structures to visualization and analytics teams.
Develop and optimize complex DAX expressions and SQL queries for data manipulation.
Troubleshoot and resolve issues, identifying root causes to prevent recurrence.
Escalate critical issues when appropriate and ensure timely resolution.
Contribute to the evolution of Machine Learning (ML) and AI model development processes.
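For context on the "Gold" layer named in the responsibilities above: in a Medallion Architecture, raw (Bronze) data is cleaned into Silver tables and then aggregated into business-ready Gold tables. A toy Python sketch of that last step, with an illustrative schema that is not from the posting:

```python
from collections import defaultdict

# Silver layer: cleaned, row-level records (illustrative schema).
silver_orders = [
    {"region": "TX", "amount": 120.0},
    {"region": "TX", "amount": 80.0},
    {"region": "OH", "amount": 200.0},
]

def build_gold_sales_by_region(rows):
    """Aggregate Silver rows into a Gold summary table for reporting."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

gold = build_gold_sales_by_region(silver_orders)
```

In a real Databricks or Power BI pipeline this aggregation would be expressed in SQL or PySpark rather than plain Python, but the Bronze-to-Silver-to-Gold refinement pattern is the same.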
Qualifications
Bachelor's degree in Business Administration, Information Technology, or a related field.
2+ years experience ensuring data quality (completeness, validity, consistency, timeliness, accuracy).
2+ years experience organizing and preparing data models for analysis using systematic approaches.
Demonstrated experience with AI-enabled platforms for data modernization.
Experience delivering work using Agile/Scrum practices and software release cycles.
Proficient in Azure, Databricks, SQL, Python, Power BI, and DAX.
Good knowledge of CI/CD and deployment processes.
3+ years experience working with clients and delivering under tight deadlines.
Prior experience with projects of similar size and scope.
Ability to work independently and collaboratively in a team environment.
Skills & Competencies
Exceptional organizational and time management skills.
Ability to manage stakeholder expectations and influence decisions.
High attention to detail and commitment to quality.
Strong leadership and team-building capabilities.
Ability to adapt to changing priorities and work under pressure.
$81k-113k yearly est. 4d ago
Software Engineer
Impower.Ai 3.8
Columbus, OH
Software Engineer - Internal Product Team
Division: Impower Solutions (Agility Partners)
About Impower
Impower is the technology consulting division of Agility Partners, specializing in automation & AI, data engineering & analytics, software engineering, and digital transformation. We deliver high-impact solutions with a focus on innovation, efficiency, and client satisfaction.
Role Overview
We're building a high-performing internal product team to scale our proprietary tech stack. As a Software Engineer, you'll contribute to the development of internal platforms using modern technologies. You'll collaborate with product and engineering peers to deliver scalable, maintainable solutions that drive Impower's consulting capabilities.
Key Responsibilities
Development & Implementation
Build scalable APIs using TypeScript and Bun for high-performance backend services.
Develop intelligent workflows and AI agents leveraging Temporal, enabling robust orchestration and automation.
Move and transform data using Python and DBT, supporting analytics and operational pipelines.
Contribute to full-stack development of internal websites using Next.js (frontend), Elysia (API layer), and Azure SQL Server (database).
Implement CI/CD pipelines using GitHub Actions, with a focus on automated testing, secure deployments, and environment consistency.
Deploy and manage solutions in Azure, including provisioning and maintaining infrastructure components such as App Services, Azure Functions, Storage Accounts, and SQL databases.
Monitor and troubleshoot production systems using SigNoz, ensuring observability across services with metrics, traces, and logs to maintain performance and reliability.
Write clean, testable code and contribute to unit, integration, and end-to-end test suites.
Collaborate in code reviews, sprint planning, and backlog grooming to ensure alignment and quality across the team.
Innovation & Strategy
Stay current with emerging technologies and frameworks, especially in the areas of agentic AI, orchestration, and scalable infrastructure.
Propose improvements to internal platforms based on performance metrics, developer experience, and business needs.
Contribute to technical discussions around design patterns, tooling, and long-term platform evolution.
Help evaluate open-source tools and third-party services that could accelerate development or improve reliability.
Delivery & Collaboration
Participate in agile ceremonies including sprint planning, standups, and retrospectives.
Collaborate closely with product managers, designers, and other engineers to translate requirements into working solutions.
Communicate progress, blockers, and technical decisions clearly and proactively.
Take ownership of assigned features and enhancements from ideation through deployment and support.
Leadership
Demonstrate ownership and accountability in your work, contributing to a culture of reliability and continuous improvement.
Share knowledge through documentation, pairing, and informal mentoring of junior team members.
Engage in code reviews to uphold quality standards and foster team learning.
Actively participate in team discussions and help shape a collaborative, inclusive engineering culture.
Qualifications
2-4 years of experience in software engineering, ideally in a product-focused or platform engineering environment.
Proficiency in TypeScript and Python, with hands-on experience in full-stack development.
Experience building APIs and backend services using Bun, Elysia, or similar high-performance frameworks (e.g., Fastify, Express, Flask).
Familiarity with Next.js for frontend development and Azure SQL Server for relational data storage.
Experience with workflow orchestration tools such as Temporal, Airflow, or Prefect, especially for building intelligent agents or automation pipelines.
Proficiency in data transformation using DBT, with a solid understanding of analytics engineering principles.
Strong understanding of CI/CD pipelines using GitHub Actions, including automated testing, environment management, and secure deployments.
Exposure to observability platforms such as SigNoz, Grafana, Prometheus, or OpenTelemetry, with a focus on metrics, tracing, and log aggregation.
Solid grasp of software testing practices and version control (Git).
Excellent communication skills, a collaborative mindset, and a willingness to learn and grow within a team.
Why Join Us?
Build impactful internal products that shape the future of Impower's consulting capabilities.
Work with cutting-edge technologies in a collaborative, innovation-driven environment.
Enjoy autonomy, growth opportunities, and a culture that values excellence and people.
$57k-75k yearly est. 5d ago
DevOps Engineer
Russell Tobin 4.1
Plano, TX
Job Title: DevOps Engineer
Duration: 6 months with possible extension
An ideal candidate must have 3-5 years of experience in networking, automated testing, and maintaining CI/CD pipelines and certificate chains (the relationship between a root certificate and an intermediate certificate).
Top 3 Skills:
Python, Java & AWS. Must understand containerization and how to create profiles to build containers.
Experience with Certificate Chains, both root certificates and intermediate certificates
Experience with Sectigo certificates would be an advantage.
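The certificate-chain requirement above refers to verifying that a leaf certificate links, through intermediates, to a trusted root. A toy illustration of the chain-walking idea (real validation also checks signatures, expiry, and extensions; the names here are made up):

```python
# Toy certs: each maps a subject to the issuer that signed it.
certs = {
    "leaf.example.com": "Example Intermediate CA",
    "Example Intermediate CA": "Example Root CA",
    "Example Root CA": "Example Root CA",  # roots are self-issued
}

def chain_to_root(subject, certs, trusted_roots):
    """Follow issuer links from a leaf up to a trusted root."""
    chain = [subject]
    while certs.get(subject) != subject:  # stop at a self-issued cert
        subject = certs[subject]
        if subject in chain:              # guard against issuer cycles
            return None
        chain.append(subject)
    return chain if subject in trusted_roots else None

chain = chain_to_root("leaf.example.com", certs, {"Example Root CA"})
```

In production this logic lives inside TLS libraries (e.g. OpenSSL); the sketch only shows why an intermediate certificate sits between the leaf and the root.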
Who we're looking for:
The Enterprise Tools team is seeking a highly motivated Contractor to support our growing Enterprise tools needs. This position will be responsible for building world-class platform engineering for our development teams.
What you'll be doing
Technology Strategy and Execution:
Build reports to detail container configurations, deployments, and usage in AWS.
Work with team members to implement technology solutions that enhance container development.
Automate the building and updating of container images following industry best practices, and contribute to technology improvements.
Participate in technical decision making and provide expertise in implementation approaches.
Self-Service and Automation:
Implement and maintain self-service capabilities and automation solutions.
Execute process improvements and automation initiatives to increase efficiency.
Provide technical guidance on containerization best practices.
Requirements:
What you bring
Proven experience in designing, developing, and maintaining cloud-native platforms and services using AWS or other cloud providers.
Must have hands-on experience working with GitHub to generate reports using the GitHub API, pull requests, and automation.
Must have hands-on experience with building, testing, and deploying container images.
Strong proficiency in programming and scripting languages such as Python, Java, Bash or Groovy for automation.
Strong working knowledge and hands on experience with AWS to deploy and monitor containerized applications running on Kubernetes, ECS, and EC2 instances.
Strong working knowledge of certificates, keystores, and networking to establish communications between systems.
$95k-123k yearly est. 1d ago
Audiovisual Engineer
Cornerstone Technology Talent Services 3.2
Fort Worth, TX
Job Title: AV Technician (Contract)
Job Type: Contract
Worksite Requirement: 100% Onsite
We are seeking a contract AV Engineer to support conference room upgrades and handle daily break/fix AV support in a professional office environment. The ideal candidate will have experience with Teams Rooms and Zoom Rooms, be comfortable troubleshooting AV systems, and capable of assisting with or overseeing wall-mounted display installations. This role requires a proactive approach to resolving AV issues efficiently and professionally.
Responsibilities:
Respond to and manage AV support tickets
Provide break/fix support for projectors, displays, and conference room AV systems
Troubleshoot and maintain Teams Rooms and Zoom Rooms, including top-level issue diagnosis
Oversee and assist with installation and replacement of wall-mounted displays
Maintain accurate records of AV issues, repairs, and system changes
Collaborate with IT or Facilities teams as needed to complete tasks
Requirements:
Minimum of 2 years of experience in AV engineering or support
Hands-on experience with projectors, displays, Teams Rooms, and Zoom Rooms
Ability to diagnose AV issues and implement reliable solutions
Comfortable working with physical hardware, including mounting and lifting AV equipment
Professional communication and customer service skills
Preferred Qualifications:
Familiarity with control systems such as Crestron or Extron
Basic understanding of networking as it relates to AV functionality
CTS or other relevant AV certifications
Additional Information:
This is a full-time contract role based entirely onsite in Fort Worth, TX. You will be working as part of a collaborative and responsive team in a professional setting. Candidates must be local or able to commute daily.
$77k-109k yearly est. 2d ago
N4 Engineer
Educated Solutions Corp 3.9
Dallas, TX
Our client, a leader in Building Automation Controls Systems, is seeking a Senior BAS Application Engineer. The purpose of this position is to deliver the startup, operation, and execution of a multitude of BAS projects. Engineers on this team use industry knowledge to deliver field-level programming to implement and finalize projects. This role pays in the $100K - $115K range based on experience and location, includes a guaranteed 14% annual bonus program and a strong benefit program, and requires varied travel based on team/role (see below). Company headquarters are located in WI, but this role can be based at any location in the US and is “WORK FROM HOME” when not traveling to client locations. Home base is suggested to be near a major airport anywhere in the WEST United States.
The key to this role is solid experience with Tridium/Niagara (N4) software and strong ability to create control databases, user interfaces and perform setup of control systems based on project specifications and/or sales proposals. Senior Engineers will mentor and lead teams within this process and act as subject matter experts within this discipline. Incumbents of this role will also:
Understand building controls and HVAC systems and their terminology.
Possess a thorough knowledge of the use, setup and operation of Windows-based computers and desktop applications such as MS-Word and MS-Excel.
Use proficiency in reading BAS and MEP drawings to determine if the drawing and programming required will work together.
Perform field startup and system commissioning tasks.
Provide on-site and remote technical support to installers and customers.
Create programming logic using flow diagrams, sequences of operation, panel layouts, termination details, and project specifications or sales proposals.
Program control applications using various software tools to support operator workstations, DDC field panels and third-party integration devices connected through multiple communications protocols.
Deliver on-site and remote installation of software and control programs.
Perform job site system checkout, commissioning and testing of control applications to verify proper operation according to project specifications, sales proposal and design documentation.
Develop system user interfaces, according to project specifications or sales proposal.
Act as the technical liaison between owners and construction managers.
Deliver on-site and remote end user training for the use of the installed system.
Perform advanced system analysis and diagnostics.
Determine corrective action to restore systems to proper operating condition.
Coordinate system installation with the installing contractor at the job site as required.
Perform final walkthrough with the owner and construction manager to ensure all punch list items are complete and the job receives signoff of substantial completion.
QUALIFICATIONS
Bachelor's Degree in HVAC, Electrical, Mechanical or Software Engineering OR equivalent experience in the BAS Engineering realm
3+ years' experience in Building Automation Controls, Building Automation Sales, and/or Account/Project Management.
3+ years Tridium Niagara 4 / Niagara AX Experience
3+ years Field Device Programming with any of the following: Distech / Alerton / Schneider Electric / Johnson Controls / Siemens / Honeywell.
Expert knowledge in 3 or more of the following CORE BAS Skills: Niagara 4 Software (must have), DGLux5 Software, Distech Controls Software, API Protocols, BACnet Protocol, Modbus Protocol, Lon Protocol.
Knowledge of HVAC System Sequences including Rooftop / Air Handling / Central Plants
Proficiency with desktop applications such as MS-Word and MS-Excel.
Knowledge of Google drive and its associated applications is a plus.
TRAVEL INFO
This role requires some degree of travel, but the amount depends heavily on projects, clients, and team makeup. If you greatly desire travel, the need is there, and you can be scheduled at 50% - 26 full weeks, Monday through Friday. Because this is undesirable to many, the target is at least 18 weeks - a 35% requirement. Based on client and project needs there can be busier times averaging 2-3 weeks a month, while slower times are closer to 1 week a month. Travel amounts within a week vary as well: it can be the full week, Monday through Friday, or just a night or two. To assist with work/life balance, no travel is planned over weekends and you will fly home each weekend. To be considered for this role, you should expect to travel somewhere between 18 and 26 weeks a year; 95% of the travel is “planned,” meaning you will know in advance when and where you will be going. Some of the travel will include trips to the corporate office in WI.
$100k-115k yearly 4d ago
Data Scientist (Remote)
Elder Research 3.9
Arlington, VA
Job Title: Data Scientist
Workplace: Remote (preference for candidates located in the National Capital Region - DMV)
Clearance Required: a TS, CBP BI, or DHS Suitability clearance adjudicated within the past 4 years
Supports U.S. Customs and Border Protection. As a Data Scientist, with little or no supervision, apply expert knowledge in statistical analysis, complex data mining, and artificial intelligence to derive value from data. Provide consulting related to the mining and analysis of data from a range of sources to transform raw data into concise, actionable insights. Design and implement data-driven solutions, with specific focus on advanced analytical methods, data models, and visualizations. Develop quantitative simulations and models to provide descriptive and predictive analytics recommendations. Identify trends and problems through complex big data analysis. Remain current in emerging tools and techniques in machine learning, statistical modeling, and analytics.
Requirements:
* Six (6) years of relevant experience in applied research, big data analytics, statistics, applied mathematics, data science, computer science, operations research, or another closely related quantitative or mathematical discipline. At least three (3) years of direct experience in machine learning.
* Advanced Degree (Masters or PhD) in Statistics, Applied Mathematics, Data Science, Computer Science, Operations Research, or another closely related quantitative or mathematical discipline. A PhD degree may be substituted for up to three (3) years of relevant experience.
* Demonstrated knowledge of data mining methods, databases, data visualization, and machine learning.
* Ability to communicate analysis techniques, concepts, and products.
* Ability to develop data-driven solutions, data models, and visualizations.
Preferred Experience and Skills:
* Expertise using Qlik to embed visualizations into webpages
* Familiarity with Databricks or similar cloud-based distributed database technologies
* Familiarity with PySpark and Python
* Comfortable developing complex SQL queries to extract, transform, and load data
* Experience with analytic techniques such as Anomaly detection, Clustering, and Time-series (e.g., ARIMA)
* Experience implementing NLP concepts including preprocessing (stemming, etc.), TF-IDF, Named Entity Recognition, and LLMs.
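The TF-IDF weighting named in the NLP bullet above can be sketched in a few lines of plain Python. This is an illustrative implementation only; the toy corpus and token lists are invented, not drawn from the posting:

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute TF-IDF weights for a list of pre-tokenized documents."""
    n_docs = len(corpus)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in corpus for term in set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Hypothetical mini-corpus of three tokenized documents.
docs = [["border", "crossing", "data"],
        ["border", "anomaly"],
        ["data", "anomaly", "data"]]
w = tf_idf(docs)
# "crossing" appears in only one document, so it is weighted
# highest within that document.
```

In practice a library implementation (e.g., scikit-learn's vectorizers) would replace this sketch, and preprocessing steps such as stemming would run before tokenization.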
Why apply to this position at Elder Research?
* Competitive Salary and Benefits
* Important Work / Make a Difference: supporting Customs and Border Protection in their efforts to protect the United States.
* Job Stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract.
* Remote Work: a remote position in an industry where remote opportunities are declining.
* People-Focused Culture: we prioritize work-life balance and provide a positive, supportive work environment, as well as opportunities for professional growth and advancement.
* Company Stock Ownership: all employees are provided with shares of the company each year based on company value and profits.
$76k-111k yearly est. 60d+ ago
Senior Data Engineer
Novalink Solutions 3.1
Austin, TX jobs
Understands business objectives and problems, identifies alternative solutions, and performs studies and cost/benefit analyses of alternatives. Analyzes user requirements, procedures, and problems to automate processing or to improve existing computer systems: confers with personnel of the organizational units involved to analyze current operational procedures, identify problems, and learn specific input and output requirements, such as forms of data input, how data is to be summarized, and formats for reports. Writes detailed descriptions of user needs, program functions, and the steps required to develop or modify computer programs. Reviews computer system capabilities, specifications, and scheduling limitations to determine whether a requested program or program change is possible within the existing system.
The Department of Information Resources (DIR) requires the services of a Data Engineer, hereafter referred to as Worker, who meets the general qualifications of Systems Analyst 3, Emerging Technologies, and the specifications outlined in this document for Health and Human Services Commission (HHSC) Information Technology.
All work products resulting from the project shall be considered "works made for hire" and are the property of HHSC. HHSC may include pre-selection requirements that potential Vendors (and their Workers) submit to and satisfy criminal background checks as authorized by Texas law. HHSC will pay no fees for interviews or discussions that occur during the process of selecting a Worker(s).
HHSC IT is continuing to develop an HHS data integration hub with a goal to accomplish the following:
• Develop the DAP/PMAS Report on Medicaid Personal Care Services
• Implementation and configuration of the infrastructure for the data integration hub
• Design, development, and implementation (DD&I) of the data integration hub using an agile methodology for all standard SDLC phases that includes, but is not limited to:
o Validation of performance metric requirements
o Creation of Epics/User Stories/Tasks
o Automation of data acquisition from a variety of data sources
o Development of complex SQL scripts
o Testing - integration, load and stress
o Deployment / publication internally and externally
• Operations support and enhancement of the data integration hub
This development effort will utilize an agile methodology based upon the approach currently in use at HHSC for the Performance Management & Analytics System (PMAS).
As a member of the agile development team, the Worker's responsibilities may include:
• Filling the role of a technical leader, leading an agile development team through a project.
• Data acquisition from a variety of data sources for multiple uses.
• Developing complex SQL scripts to transform the source data to fit into a dimensional model, then to create views and materialized views in Oracle.
• Developing automation with Informatica Power Center/IICS to pull data from external data sources and transform it to fit into a dimensional model.
• Collaborating with other members of the Data Engineering Team on the design and implementation of an optimal data design.
• Verification and validation of SQL scripts, Informatica automation and database views.
• Developing automated means of performing verification and validation.
• Participating in all sprint ceremonies.
• Working closely with the Architects and Data Engineering Team on implementation designs and data acquisition strategies.
• Developing mockups and working with customers for validation.
• Working closely with other members of the team to address technical problems.
• Assisting with the implementation and configuration of development tools.
• Producing and maintaining technical specifications, diagrams, or other documentation as needed to support the DD&I efforts.
• Participating in requirements and design sessions.
• Interpreting new and changing business requirements to determine their impact, and proposing enhancements and changes to meet these new requirements.
• All other duties as assigned.
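The SQL duties listed above, transforming source data into a dimensional model and exposing it through views, can be sketched with a minimal star-schema example. This is an illustration, not HHSC's actual schema: the table and column names are invented, and Python's built-in sqlite3 stands in for Oracle (which would additionally support `CREATE MATERIALIZED VIEW` for the reporting layer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical source table, one dimension, and a fact table.
cur.executescript("""
CREATE TABLE raw_claims (claim_id INTEGER, svc_date TEXT, provider TEXT, amount REAL);
INSERT INTO raw_claims VALUES (1, '2024-01-05', 'A', 120.0),
                              (2, '2024-01-07', 'A',  80.0),
                              (3, '2024-02-02', 'B', 200.0);

-- Dimension with a surrogate key per distinct provider.
CREATE TABLE dim_provider (provider_key INTEGER PRIMARY KEY, provider TEXT UNIQUE);
INSERT INTO dim_provider (provider) SELECT DISTINCT provider FROM raw_claims;

-- Fact rows reference the dimension by surrogate key.
CREATE TABLE fact_claims AS
SELECT c.claim_id, c.svc_date, d.provider_key, c.amount
FROM raw_claims c JOIN dim_provider d USING (provider);

-- A reporting view over the fact table (materialized in Oracle).
CREATE VIEW monthly_spend AS
SELECT substr(svc_date, 1, 7) AS month, provider_key, SUM(amount) AS total
FROM fact_claims GROUP BY month, provider_key;
""")
rows = cur.execute(
    "SELECT month, total FROM monthly_spend ORDER BY month, provider_key"
).fetchall()
```

In the Informatica-based workflow the posting describes, the acquisition and transformation steps would be built as mappings rather than hand-run scripts, with the SQL above corresponding to the target-side model.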
II. CANDIDATE SKILLS AND QUALIFICATIONS
Minimum Requirements:
Candidates who do not meet or exceed the minimum stated requirements (skills/experience) will be displayed to customers but may not be chosen for this opportunity.
Years | Required/Preferred | Experience
8 | Required | Experience developing mappings and workflows to automate ETL processes using Informatica Power Center or IICS.
8 | Required | Experience acquiring and integrating data from multiple data sources/technologies using Informatica Power Center or IICS for use by a Tableau data visualization object. Data source technologies should include Oracle, SQL Server, Excel, Access, and Adobe PDF.
8 | Required | Experience designing and developing complex Oracle and/or Snowflake SQL scripts that are fast and efficient.
8 | Required | Strong analytical and problem-solving skills, with experience as a systems analyst for a data analytics, performance management system, or data warehousing project.
8 | Required | Technical writing and diagramming skills, including proficiency with modeling and mapping tools (e.g., Visio, Erwin), the Microsoft Office Suite (Word, Excel, and PowerPoint), and MS Project.
8 | Required | Experience in planning and delivering software platforms used across multiple products and organizational units.
8 | Required | Proven ability to write well-designed, testable, efficient code using software development best practices.
6 | Preferred | Excellent oral and written communication skills.
6 | Preferred | Ability to effectively manage multiple responsibilities, prioritize conflicting assignments, and switch quickly between assignments as required.
4 | Preferred | Experience on an agile sprint team.
4 | Preferred | Understanding of security principles and how they apply to healthcare data.
4 | Preferred | Experience with state-of-the-art software components for a performance metrics data visualization or business intelligence environment.
4 | Preferred | Bachelor's degree in Computer Science, Information Systems, or Business, or equivalent experience.
4 | Preferred | Prior experience in the healthcare industry.
4 | Preferred | Prior experience with an HHS agency.
4 | Preferred | Prior experience working with PII or PHI data.
4 | Preferred | Experience designing and developing scripts using Python.
4 | Preferred | Experience with JIRA software.
2 | Preferred | Functional knowledge or hands-on design experience with Web Services (REST, SOAP, etc.).
2 | Preferred | Experience designing and developing code using Java and JavaScript.
2 | Preferred | Experience developing CI/CD pipelines with GitHub and GitHub Actions.
2 | Preferred | Experience as a MuleSoft developer.
2 | Preferred | Experience developing code in C#.
$84k-118k yearly est. 24d ago
Google Cloud Data & AI Engineer
Slalom 4.6
Dallas, TX jobs
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We are passionate about building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1
$145k-217.5k yearly 9d ago
Google Cloud Data & AI Engineer
Slalom 4.6
Houston, TX jobs
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We are passionate about building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1
$145k-217.5k yearly 9d ago
Google Cloud Data & AI Engineer
Slalom 4.6
Austin, TX jobs
Who You'll Work With
As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We are passionate about building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant or Principal at Slalom, you will be a part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients.
What You'll Do
* Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub and more.
* Implement cloud-based data solutions for data ingestion, transformation, and storage; and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs.
* Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem.
* Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps).
* Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud.
* Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices.
What You'll Bring
* Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.).
* Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance.
* Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML.
* Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment.
* Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts.
* Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects.
* Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously.
* Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud.
* Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary pay range is subject to change and may be modified at any time.
East Bay, San Francisco, Silicon Valley:
* Consultant: $114,000-$171,000
* Senior Consultant: $131,000-$196,500
* Principal: $145,000-$217,500
San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC:
* Consultant: $105,000-$157,500
* Senior Consultant: $120,000-$180,000
* Principal: $133,000-$199,500
All other locations:
* Consultant: $96,000-$144,000
* Senior Consultant: $110,000-$165,000
* Principal: $122,000-$183,000
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applicants with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We are accepting applications until 12/31.
#LI-FB1
$145k-217.5k yearly 9d ago
Senior Data Engineer
Icon Mechanical 4.8
Austin, TX jobs
ICON is seeking a Senior Data Engineer to join our Data Intelligence & Systems Architecture (DISA) team. This engineer will play a foundational role in shaping ICON's enterprise data platform within Palantir Foundry, owning the ingestion, modeling, and activation of data that powers reporting, decision-making, and intelligent automation across the company.
You will work closely with teams across Supply Chain & Manufacturing, Finance & Accounting, Human Resources, Software, Field Operations, and R&D to centralize high-value data sources, model them into scalable assets, and enable business-critical use cases, ranging from real-time reporting to operations-focused AI/ML solutions. This is a highly cross-functional and technical role, ideal for someone with strong data engineering skills, deep business curiosity, and a bias toward action. This role is based at ICON's headquarters in Austin, TX and reports to the Senior Director of Operations.
RESPONSIBILITIES:
Lead data ingestion and transformation pipelines within Palantir Foundry, integrating data from internal tools, SaaS platforms, and industrial systems
Model and maintain high-quality, governed data assets to support use cases in reporting, diagnostics, forecasting, and automation
Build analytics frameworks and operational dashboards that give teams real-time visibility into project progress, cost, equipment status, and material flow
Partner with business stakeholders and technical teams to translate pain points and questions into scalable data solutions
Drive the development of advanced analytics capabilities, including predictive maintenance, proactive purchasing workflows, and operations intelligence
Establish best practices for pipeline reliability, versioning, documentation, and testing within Foundry and across the data platform
Mentor team members and contribute to a growing culture of excellence in data and systems engineering
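Foundry specifics aside, the "pipeline reliability, documentation, and testing" practice in the list above can be sketched platform-agnostically: a pure transform function plus a data-quality check that a pipeline run asserts before publishing. All names here (fields, statuses, records) are hypothetical, not ICON's actual schema:

```python
from datetime import date

def transform(records):
    """Normalize raw equipment-status records into a reporting shape."""
    out = []
    for r in records:
        out.append({
            "equipment_id": r["id"].strip().upper(),   # canonical id
            "status": r.get("status", "unknown").lower(),
            "as_of": date.fromisoformat(r["ts"][:10]), # drop time component
        })
    return out

def quality_check(rows):
    """Fail fast if the transformed data violates basic expectations."""
    assert rows, "pipeline produced no rows"
    assert len({r["equipment_id"] for r in rows}) == len(rows), "duplicate equipment ids"
    assert all(r["status"] in {"ok", "down", "unknown"} for r in rows), "unexpected status"

# Illustrative raw records, as they might arrive from a machine data source.
raw = [{"id": " ex-07 ", "status": "OK", "ts": "2024-03-01T08:00:00"},
       {"id": "ex-09", "ts": "2024-03-01T08:05:00"}]
clean = transform(raw)
quality_check(clean)  # raises AssertionError on bad data; silent on success
```

Keeping the transform pure (no I/O) is what makes it unit-testable independently of the platform; in Foundry the same logic would live inside a managed transform with the check run as a pre-publication gate.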
REQUIRED SKILLS AND EXPERIENCE:
8+ years of experience in data engineering, analytics engineering, or backend software development
Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
Strong hands-on experience with Palantir Foundry, including Workshop, Code Repositories, Ontologies, and Object Models
Proficiency in Python and SQL for pipeline development and data modeling
Experience integrating data from APIs, machine data sources, ERP systems, SaaS tools, and cloud storage platforms
Strong understanding of data modeling principles, business logic abstraction, and stakeholder collaboration
Proven ability to independently design, deploy, and scale data products in fast-paced environments
PREFERRED SKILLS AND EXPERIENCE:
Experience supporting Manufacturing, Field Operations, or Supply Chain teams with near real-time analytics
Familiarity with platforms such as Procore, Coupa, NetSuite, or similar
Experience building predictive models or workflow automation in or on top of enterprise platforms
Background in data governance, observability, and maintaining production-grade pipelines
ICON is an equal opportunity employer committed to fostering an innovative, inclusive, diverse and discrimination-free work environment. Employment with ICON is based on merit, competence, and qualifications. It is our policy to administer all personnel actions, including recruiting, hiring, training, and promoting employees, without regard to race, color, religion, gender, sexual orientation, gender identity, national origin or ancestry, age, disability, marital status, veteran status, or any other legally protected classification in accordance with applicable federal and state laws. Consistent with the obligations of these laws, ICON will make reasonable accommodations for qualified individuals with disabilities.
Furthermore, as a federal government contractor, the Company maintains an affirmative action program which furthers its commitment and complies with recordkeeping and reporting requirements under certain federal civil rights laws and regulations, including Executive Order 11246, Section 503 of the Rehabilitation Act of 1973 (as amended) and the Vietnam Era Veterans' Readjustment Assistance Act of 1974 (as amended).
Headhunters and recruitment agencies may not submit candidates through this application. ICON does not accept unsolicited headhunter and agency submissions for candidates and will not pay fees to any third-party agency without a prior agreement with ICON.
As part of our compliance with these obligations, the Company invites you to voluntarily self-identify as set forth below. Provision of such information is entirely voluntary and a decision to provide or not provide such information will not have any effect on your employment or subject you to any adverse treatment. Any and all information provided will be considered confidential, will be kept separate from your application and/or personnel file, and will only be used in accordance with applicable laws, orders and regulations, including those that require the information to be summarized and reported to the federal government for civil rights enforcement purposes.
Internet Applicant Employment Notices

$82k-114k yearly est. 11d ago
Senior Data Engineer
Apidel Technologies 4.1
Blue Ash, OH
Job Description
The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility in the technical design process, leading and participating in the application technical design process and completing estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards, and strives to continuously improve the software delivery processes and practices. Role model and demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion and safety of others.
Current tools and technologies include:
Databricks and Netezza
Key Responsibilities
Lead and participate in the design and implementation of large and/or architecturally significant applications.
Champion company standards and best practices. Work to continuously improve software delivery processes and practices.
Build partnerships across the application, business and infrastructure teams.
Set up the new customer data platform, migrating from Netezza to Databricks.
Complete estimates and work plans independently as appropriate for design, development, implementation and rollout tasks.
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality.
Support and maintain applications utilizing required tools and technologies.
May direct the day-to-day work activities of other team members.
Must be able to perform the essential functions of this position with or without reasonable accommodation.
Work quickly with the team to implement the new platform.
Be onsite with the development team when necessary.
Behaviors/Skills:
Puts the Customer First - Anticipates customer needs, champions for the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect.
Communicates effectively and candidly - Communicates clearly and directly, is approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens.
Achieves results through teamwork - Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals.
Note to Vendors
Length of Contract: 9 months
Top skills: Databricks, Netezza
Soft Skills Needed: collaborating well with others, working in a team dynamic
Project person will be supporting: staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (i.e. size, dynamics, locations): most of the team is located in Cincinnati, working onsite at the BTD
Work Location (in office, hybrid, remote): Onsite at BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max Rate if applicable: best market rate
Required Working Hours: 8-5 EST
Interview process and when will it start: Starting with one interview; process may change
Prescreening Details: standard questions. Scores will carry over.
When do you want this person to start: Looking to hire quickly; the team is looking to move fast.
At Stewart, we know that success begins with great people. As a Stewart employee, you'll be joining a company that was named a 2024-2025 Best Company to Work For by U.S. News & World Report, and a 2025 Top Workplace by USA Today. We are committed to helping you own, develop, and nurture your career. We invest in your career journey because we understand that as you grow, so does our company. And our priority is smart growth - by attaining the best people, investing in tools and resources that enable success, and creating a better home for all.
You will be part of an inclusive work environment that reflects the customers we serve. You'll be empowered to use your unique experiences, passion and skills to help our company and the communities we serve constantly evolve and improve. Together, we can achieve our vision of becoming the premier title and real estate services company.
Stewart is a global real estate services company, providing title insurance, settlement, underwriting, and lender services through our family of companies. To learn more about Stewart, visit stewart.com/about.
More information can be found on stewart.com. Get title industry information and insights at stewart.com/insights. Follow Stewart on Facebook @StewartTitleCo, on Instagram @StewartTitleCo, and on LinkedIn @StewartTitle.
Job Description
Job Summary
Design, build, and optimize data pipelines across on-premises SQL Server environments, Azure cloud data services, and MS Dynamics Dataverse.
Job Responsibilities
Design, develop, and maintain data pipelines to extract, transform, and load (ETL/ELT) data from multiple sources (on-prem and cloud) into analytical and operational systems.
Provides data-based trends, recommendations, and resolutions to the organization, including through the optimization of monitoring and logging of data pipelines.
Performs specialized assignments, including the migration and integration of SQL Server data to Azure Data Lake, Azure SQL Database, MS D365 Dataverse, Synapse, or Fabric.
Interprets the internal/external business environment.
Recommends best practices and implements data transformation and processing logic to optimize processes for performance and cost-efficiency using Azure-native tools.
Uses understanding of data modeling, normalization, and dimensional modeling concepts.
Impacts achievement of customer, operational, project, or service objectives by utilizing skills in Python or PySpark for data transformation and automation.
Communicates difficult concepts to teams and departments to generate clarity and alignment on projects, initiatives, and various work products.
May lead functional projects with moderate risks and resource requirements.
Individual contributor working independently; may require guidance in highly complex situations.
Performs all other duties as assigned by management.
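The extract-transform-load responsibilities above follow a common shape. The sketch below is a hypothetical, plain-Python stand-in for that shape; real pipelines in this role would run in Azure Data Factory or Synapse against SQL Server, Dataverse, and ADLS, and all names and fields here are invented for illustration.

```python
# Hypothetical ETL sketch: extract rows from a source, normalize them,
# and load the result into a sink. Not Azure-specific code.

def extract():
    # Stand-in for reading from SQL Server or Dataverse.
    return [
        {"CustomerId": " 42 ", "State": "tx", "Balance": "100.5"},
        {"CustomerId": "7", "State": "OH", "Balance": "0"},
    ]

def transform(rows):
    # Normalization: trim identifiers, upper-case codes, cast numerics.
    return [
        {
            "customer_id": int(r["CustomerId"].strip()),
            "state": r["State"].strip().upper(),
            "balance": float(r["Balance"]),
        }
        for r in rows
    ]

def load(rows, sink):
    # Stand-in for writing to Azure SQL or ADLS.
    sink.extend(rows)
    return len(rows)

sink = []
loaded = load(transform(extract()), sink)
print(loaded, sink[0]["state"])  # 2 TX
```

The same three-stage decomposition maps onto ADF activities (copy, data flow, sink), which is what makes pipelines testable stage by stage.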
Education
Bachelor's degree in relevant field preferred
Experience
Typically requires 5+ years of related work experience
Strong experience in data movement, transformation, and processing of large-volume datasets.
Proficiency in ETL/ELT frameworks, data orchestration, and automation.
Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure SQL, and Azure Data Lake Storage (ADLS Gen2).
Knowledge of version control (Git) and CI/CD principles for data solutions.
Strong expertise in Microsoft SQL Server (on-prem) performance tuning, stored procedures, and large-scale data processing.
Equal Employment Opportunity Employer
Stewart is committed to ensuring that its online application process provides an equal employment opportunity to all job seekers, including individuals with disabilities. If you have a disability and need assistance or an accommodation in the application process, please contact us by email at *******************.
Benefits
Stewart offers eligible employees a competitive benefits package that includes, but is not limited to a variety of health and wellness insurance options and programs, paid time off, 401(k) with company match, employee stock purchase program, and employee discounts.
$76k-100k yearly est. 36d ago
Unreal Engineer, Gaming
Hired Recruiters 4.1
Austin, TX
You are a game-development professional, eager to be a key contributor on a dynamic, happy team.
We are using Unreal Engine 4. The ideal candidate has strong C++ skills in a game-development environment, along with Unreal development prowess. But don't count yourself out if your Unreal development skills are closer to novice than expert, as long as you are otherwise a true game-dev pro.
Responsibilities
Work with Product to build and maintain the best sports training platform on the market
Work closely with Design, from specification through production
Requirements
Some experience with Unreal Engine 4, even if not part of a commercial product
3+ years professional development experience in the games industry
Fluency in C++
Fantastic debugging skills
Understanding of data structures, algorithms, complexity, and system design
Basic game math fundamentals (vectors, matrices, physics, projections, camera space, tangent space, object space)
Basic understanding of software design patterns
Working knowledge of source control, including best practices (branching/streams)
A practice of code instrumentation, tools, and development KPIs
Bachelor's in CS, or equivalent experience
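The "basic game math fundamentals" bullet above (vectors, projections) can be illustrated briefly. The role itself is C++/Unreal; this is a small language-agnostic sketch written in Python for compactness, with all function names invented.

```python
import math

# Illustration of basic game math: dot product, vector length, and the
# projection of vector a onto vector b (3D tuples).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def length(a):
    return math.sqrt(dot(a, a))

def project(a, b):
    # Component of a along b, as a vector: (a.b / b.b) * b
    scale = dot(a, b) / dot(b, b)
    return tuple(scale * x for x in b)

v = (3.0, 4.0, 0.0)
axis = (1.0, 0.0, 0.0)
print(length(v))         # 5.0
print(project(v, axis))  # (3.0, 0.0, 0.0)
```

The same projection formula underlies camera-space transforms and movement along surfaces; in Unreal it corresponds to FVector's dot-product and projection helpers.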
Bonus Qualifications
Understanding of concurrent programming
Basic relational database abilities (SQL, Postgres or AWS RDS)
Experience with Git and JIRA/Confluence
Experience with build systems, continuous integration and deployment
A background working with asset management systems, asset bundles, and in particular downloadable content (DLC)
JOB REQUIRED SKILLS: C++, Unreal, AWS, SQL
Must Have:
3+ years of gaming engine development
BS CS or equivalent experience (additional 3 yrs)
Some experience with Unreal Engine 4
Fluency in C++
Basic game math skills (vectors, matrices, physics, projections, camera space, tangent space, object space)
$72k-105k yearly est. 60d+ ago