Career Opportunities: Sr Data Engineer (93674)
Data engineer job at The Freeman Company LLC
About Us
Freeman is a global leader in events, on a mission to redefine live for a new era. With a data-driven approach and the industry's largest network of experts, Freeman's insights shape exhibitions, exhibits, and events that drive audiences to action. Its integrated full-service solutions leverage a 97-year legacy in event management as well as new technologies to deliver moments that matter.
Summary
We're looking for an experienced and mission-driven Sr Data Engineer. The role reports directly to the Manager of Data Engineering and joins the company's transformative IT team. You will develop alongside a highly motivated group of data engineers, data architects, and contract data resources spanning multiple disciplines, refining Freeman's transformative systems to meet customers' needs and contributing to the strategic direction of the business. This position focuses on developing data platform solutions: sometimes building upon existing enterprise solutions, other times creating complex new ones.
This position will support our Technology Services team. It is eligible to work a hybrid schedule, generally requiring work in-office and/or show site 2-3 days per week. The ideal candidate will be based out of any of the following locations:
* Atlanta, GA
* Dallas, TX
Essential Duties & Responsibilities
* Development, testing, change & configuration management, and automated deployment of data management solutions: data ingestion, data transformation, data virtualization, data models, interfaces, databases, etc.
* System integration across foundational data sources, the on-premises and cloud platforms they reside within, and other third-party data solutions.
* Designing, building, and maintaining ETL/ELT data pipelines (a minimal illustrative sketch follows this list)
* Working with large datasets, data lakes (lakehouse), and big data technologies, preferably cloud-based, such as Snowflake, Synapse, Databricks, or similar
* Use Git and Azure DevOps Pipelines for deployment automation of data solutions.
* Design and develop data warehouses and data pipelines within Azure and/or AWS.
* Act as a key contributor to the design and development lifecycle of analytic applications utilizing Snowflake, AWS, Microsoft Azure and BI technology platforms.
* Participate in Agile ceremonies including daily stand-ups, sprint planning, retrospectives, and product demonstrations
* Produce efficient and elegant code that meets business requirements
* Author unit tests that adhere to code coverage guidelines
* Proactively communicate progress, issues, and risks to stakeholders
* Create and maintain technical documentation
* Perform other related duties as directed
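To illustrate the kind of pipeline code and unit testing these duties involve, here is a minimal, self-contained sketch; the dataset and column names are hypothetical and not drawn from Freeman's systems:

```python
# Minimal sketch: a small, testable transformation plus a pytest-style unit
# test. The registration dataset and its columns are hypothetical examples.
import pandas as pd


def clean_event_registrations(raw: pd.DataFrame) -> pd.DataFrame:
    """Normalize emails, drop duplicate registrations, keep valid addresses."""
    out = raw.copy()
    out["email"] = out["email"].str.strip().str.lower()
    out = out.drop_duplicates(subset=["registration_id"])
    return out[out["email"].str.contains("@", na=False)]


def test_clean_event_registrations():
    raw = pd.DataFrame({
        "registration_id": [1, 1, 2, 3],
        "email": [" A@X.COM ", " A@X.COM ", "b@y.com", "not-an-email"],
    })
    cleaned = clean_event_registrations(raw)
    assert list(cleaned["registration_id"]) == [1, 2]
    assert list(cleaned["email"]) == ["a@x.com", "b@y.com"]
```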
Education & Experience
* Bachelor's degree preferred, High School Diploma or Equivalent with relevant work experience required
* Experience designing Data Lakes and Data Warehouses
* Experience with Snowflake, Azure Synapse and Azure Databricks preferable
* Participate in driving best practices around data engineering and software development processes
* Understand data and query optimization, query profiling, and query performance monitoring tools and techniques
* Experience with a range of AWS and/or Azure services, including infrastructure- and security-related services such as Azure AD, IaaS, containers, storage, networking, and Azure Security, as well as Airflow, SNS, SQS, and S3
* Familiarity with designing and building data platforms supporting both batch and real-time (event-based) architecture for cloud data platforms is considered a strong plus
* Familiarity with big data platforms and tools (e.g., Kafka), predictive modeling, machine learning, etc. is considered a strong plus
* 2-5 years of hands-on experience designing and implementing large-scale distributed data architecture for BI and OLTP systems
* 2-5 years of hands-on experience designing and implementing large-scale data pipelines
* 2-5 years of hands-on experience in Snowflake, AWS and Azure data services
* 2-5 years of hands-on experience with data integration using ETL / ELT tools
* Experience with cloud-based technologies, preferably AWS and Azure, including Airflow, SNS, SQS, S3, ADLS Gen 2, Data Factory, Snowflake, Databricks, and Synapse Analytics
* Experience with one or more Python parallel processing libraries preferable
* Experience with one or more Python data analysis libraries preferable
* Experience with data integration through APIs, Web Services, SOAP, and/or REST services
* Experience using Azure DevOps and CI/CD as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
* Knowledge of SOA and microservices application architecture
* Ability to work in a fast-paced, collaborative team environment
* Excellent written and verbal communication skills and ability to express ideas clearly and concisely
Travel Requirements
Travel up to 25%
What We Offer
Freeman provides benefits that aim to empower our people and their families to thrive mentally, physically, and financially. These are a handful of the types of programs and benefits our full-time people may be eligible for. There may be some variances in specific benefits across regions.
* Medical, Dental, Vision Insurance
* Tuition Reimbursement
* Paid Parental Leave
* Life, Accident and Disability
* Retirement with Company Match
* Paid Time Off
Diversity Commitment
At Freeman, our commitment to diversity and inclusion is helping us to create not only a great place to work, but also an environment where our employees, our customers and our communities around the world can reach their goals and connect with each other. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, veteran status and other characteristics protected by federal, state or local laws.
#LI-Hybrid
Senior Data Engineer
Nashville, TN jobs
Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************
YOUR ROLE
Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights.
You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care.
As a Senior Data Engineer you will:
Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams (a minimal sketch follows this list).
Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency.
Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
Work with SageMaker for machine learning data preparation and integration.
Ensure data security, privacy, and compliance with industry regulations.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
Implement best practices for data engineering, including code reviews, testing, and automation.
Mentor junior data engineers and share knowledge on data engineering best practices.
Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies.
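As a rough, hypothetical sketch of the serverless ingestion pattern these responsibilities describe (an S3 upload triggering a Redshift load through the Redshift Data API), with all cluster, table, and role names invented for illustration:

```python
# Hypothetical Lambda handler: load a newly landed S3 object into Redshift
# via the Redshift Data API. Names are illustrative; retries, idempotency,
# and PHI/PII handling are omitted for brevity.
import boto3

redshift = boto3.client("redshift-data")


def handler(event, context):
    record = event["Records"][0]["s3"]  # S3 put-event payload
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    copy_sql = (
        f"COPY analytics.lab_orders FROM 's3://{bucket}/{key}' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader' "
        "FORMAT AS PARQUET"
    )
    resp = redshift.execute_statement(
        ClusterIdentifier="example-warehouse",
        Database="analytics",
        DbUser="loader",
        Sql=copy_sql,
    )
    return {"statement_id": resp["Id"]}
```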
After 3 months on the job you will have:
Developed a strong understanding of Concert's data engineering infrastructure
Learned the business domain and how it maps to the information architecture
Made material contributions towards existing key results
After 6 months you will have:
Led a major initiative
Become the first point of contact when issues related to the data warehouse are identified
After 12 months you will have:
Taken responsibility for the long term direction of the data engineering infrastructure
Proposed and executed key results with an understanding of the business strategy
Communicated the business value of major technical initiatives to key non-technical business stakeholders
WHAT LEADS TO SUCCESS
Self-Motivated A team player with a positive attitude and a proactive approach to problem-solving.
Executes Well You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
Comfort with Ambiguity You aren't afraid of uncertainty and blazing new trails, you care about building towards a future that is different from today.
Technical Bravery You are comfortable with new technologies and eager to dive in to understand data in the raw and in its processed states.
Mission-focused You are personally motivated to drive more affordable, equitable and effective integration of genomic technologies into clinical care.
Effective Communication You build rapport and strong working relationships with senior leaders and peers, and you use those relationships to drive the company forward
RELEVANT SKILLS & EXPERIENCE
Minimum of 4 years' experience working as a data engineer
Bachelor's degree in software or data engineering or comparable technical certification / experience
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
Strong understanding of data warehousing principles and best practices
Experience with DBT for data transformation and modeling.
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Familiarity or experience with the following tools / concepts are a plus: BI tools such as Metabase; Healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; ZeroETL and Kinesis data streams
COMPENSATION
Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.
LOCATION
Concert is based in Nashville, Tennessee and supports a remote work environment.
For further questions, please contact: ******************.
Head of Data Science & AI
Austin, TX jobs
Duration: 6 month contract-to-hire
Compensation: $150K-160K
Work schedule: Monday-Friday (8 AM-5PM CST) - onsite 3x per week
Benefits: This position is eligible for medical, dental, vision and 401(k)
The Head of Data Science & AI leads the organization's data science strategy and team, driving advanced analytics and AI initiatives to deliver business value and innovation. This role sets the strategic direction for data science, ensures alignment with organizational goals, and promotes a data-driven culture. It involves close collaboration with business and technology teams to identify opportunities for leveraging machine learning and AI to improve operations and customer experiences.
Key Responsibilities
Develop and execute a data science strategy and roadmap aligned with business objectives.
Build and lead the data science team, providing mentorship and fostering growth.
Partner with business leaders to identify challenges and deliver actionable insights.
Oversee design and deployment of predictive models, algorithms, and analytical frameworks.
Ensure data integrity, governance, and security in collaboration with engineering teams.
Communicate complex insights to non-technical stakeholders.
Manage infrastructure, tools, and budget for data science initiatives.
Drive experimentation with emerging AI technologies and ensure ethical AI practices.
Oversee full AI model lifecycle: development, deployment, monitoring, and compliance.
Qualifications
8+ years in data science/analytics with leadership experience.
Expertise in Python, R, SQL, and ML frameworks (TensorFlow, PyTorch, Scikit-Learn).
Experience deploying ML models and monitoring performance.
Familiarity with visualization tools (Tableau, Power BI).
Strong knowledge of data governance, advanced statistical methods, and AI trends.
Skills in project management tools (MS Project, JIRA) and software development best practices (CI/CD, Git, Agile).
Please apply directly to be considered.
Lead Data Engineer
Roseland, NJ jobs
Job Title: Lead Data Engineer.
Hybrid Role: onsite 3 times per week.
Type: 12-month contract, rolling/extendable.
Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements.
Must haves:
AWS.
Databricks.
Lead experience (staff-level experience may also be considered).
Python.
PySpark.
Contact Center Experience is a nice to have.
Job Description:
As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace that provides curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage.

You will design and maintain enterprise data models for contact-center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.), and work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources against desired use cases, and help define and maintain the enterprise data taxonomy and data catalog.

This role requires clear, compelling, and influential communication. You will mentor developers and collaborate with peer architects and developers on other teams.
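As a minimal, hypothetical sketch of the curation work described above (batching raw contact-center interactions into a standardized, consumption-ready dataset), with all paths, schemas, and KPI definitions invented for illustration:

```python
# Hypothetical PySpark job: curate raw contact-center interactions into a
# consumption-ready daily KPI table. Paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("service-data-hub").getOrCreate()

raw = spark.read.parquet("s3://raw-zone/contact_center/interactions/")

curated = (
    raw.withColumn(
        "handle_time_sec",
        F.col("end_ts").cast("long") - F.col("start_ts").cast("long"),
    )
    .groupBy("agent_id", F.to_date("start_ts").alias("call_date"))
    .agg(
        F.count("*").alias("handled_calls"),                # standardized KPI
        F.avg("handle_time_sec").alias("avg_handle_time"),  # supporting metric
    )
)

(curated.write.mode("overwrite")
    .partitionBy("call_date")
    .parquet("s3://curated-zone/contact_center/agent_daily/"))
```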
TO SUCCEED IN THIS ROLE:
Ability to define and design complex data integration solutions with general direction and stakeholder access.
Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team.
Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake.
Highly skilled in RDBMS platforms such as Oracle and SQL Server.
Familiarity with NoSQL DB platforms like MongoDB.
Understanding of data modeling and data engineering, including SQL and Python.
Strong understanding of data quality, compliance, governance and security.
Proficiency in languages such as Python, SQL, and PySpark.
Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval.
Ability to design and develop scalable data pipelines.
Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable.
Experience with code repository and project tools such as GitHub, JIRA, and Confluence.
Working experience with CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias towards simplifying processes.
Detail-oriented with strong analytical, problem-solving, and organizational skills.
Ability to clearly communicate with both technical and business teams.
Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus.
Knowledge of Agile development methodologies is a plus.
Having a Databricks data engineer associate certification is a plus but not mandatory.
Data Engineer Requirements:
Bachelor's degree in computer science, information technology, or a similar field.
8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies.
Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features.
Knowledge of mathematical foundations and statistical analysis.
Strong interpersonal skills.
Excellent communication and presentation skills.
Advanced troubleshooting skills.
Regards,
Purnima Pobbathy
Senior Technical Recruiter
************
********************* | Themesoft Inc
Data Engineer
Denver, CO jobs
Data Engineer
Compensation: $80 - $90/hour, depending on experience
Inceed has partnered with a great energy company to help find a skilled Data Engineer to join their team!
Join a dynamic team where you'll be at the forefront of data-driven operations. This role offers the autonomy to design and implement groundbreaking data architectures, working primarily remotely. This position is open due to exciting new projects. You'll be collaborating with data scientists and engineers, making impactful contributions to the company's success.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from various sources ensuring consistency and reliability
Develop automation workflows and BI solutions
Mentor others and advise on data process best practices
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of data engineering experience
Experience with PI
Experience with SCADA
Experience with Palantir
Experience with large oil and gas datasets
Proficiency in Python and SQL
Hands-on experience in cloud environments (Azure, AWS, GCP)
Nice to Have Skills & Experience:
Familiarity with Apache Kafka or Flink
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Denver, CO jobs
Data Engineer
Compensation: $80 - $90/hour, depending on experience
Inceed has partnered with a great company to help find a skilled Data Engineer to join their team!
Join a dynamic team as a contract Data Engineer, where you'll be the backbone of data-driven operations. This role offers the opportunity to work with a modern tech stack in a hybrid on-prem and cloud environment. You'll design and implement innovative solutions to complex challenges, collaborating with data scientists, location intelligence experts, and ML engineers. This exciting opportunity has opened due to a new project initiative and you'll be making a tangible impact.
Key Responsibilities & Duties:
Design and deploy scalable data pipelines and architectures
Collaborate with stakeholders to deliver high-impact data solutions
Integrate data from multiple sources ensuring quality and reliability
Develop automation workflows and BI solutions
Mentor others and contribute to the knowledge base
Explore and implement emerging technologies
Required Qualifications & Experience:
8+ years of experience in data engineering
Experience with large oil and gas datasets
Proficiency in SQL and Python
Hands-on experience in cloud environments (Azure, AWS, or GCP)
Familiarity with Apache Kafka, Apache Flink, or Azure Event Hubs
Nice to Have Skills & Experience:
Experience with Palantir Foundry
Knowledge of query federation platforms
Experience with modern data stack tools like dbt or Airflow
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
Hamilton, NJ jobs
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors (see the sketch after this list).
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
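A minimal sketch of the automated SFTP pull this kind of managed file transfer involves; host, credentials, and paths are hypothetical, and a production job would use key-based auth plus retry and alerting:

```python
# Hypothetical SFTP pull of a vendor's daily file into the landing area.
# Host, account, and paths are invented for illustration.
import paramiko

transport = paramiko.Transport(("vendor.example.com", 22))
transport.connect(username="svc_market_data", password="***")
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Download the vendor file for downstream ingestion.
    sftp.get("/outbound/bloomberg_eod.csv", "/data/landing/bloomberg_eod.csv")
finally:
    sftp.close()
    transport.close()
```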
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
Senior Data Analytics Engineer (Customer Data)
Irving, TX jobs
Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is fully remote.
Build, optimize, and maintain customer data pipelines in PySpark/Databricks to support CDP-driven use cases across AWS/Azure/GCP
Transform raw and integrated customer data into analytics-ready datasets used for dashboards, reporting, segmentation, personalization, and downstream AI/ML applications
Develop and enrich customer behavior metrics, campaign analytics, and performance insights such as ad engagement, lifecycle metrics, and retention
Partner with Marketing, Sales, Product, and Data Science teams to translate business goals into metrics, features, and analytical data models
Build datasets consumed by Power BI/Tableau dashboards (hands-on dashboard creation not required)
Ensure high cluster performance and pipeline optimization in Databricks, including troubleshooting skewed joins, sorting, partitioning, and real-time processing needs (a minimal skew-handling sketch follows this list)
Work across multiple cloud and vendor ecosystems such as: AWS/Azure/GCP; Hightouch or comparable CDP vendors
Participate in the data ingestion and digestion phases, shaping integrated data into analytical layers for MarTech and BI
Contribute to and enforce data engineering standards, documentation, governance, and best practices across the organization
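Since troubleshooting skewed joins is called out above, here is one common mitigation, key salting, as a minimal PySpark sketch; `spark`, `facts`, and `dim` are assumed to be an existing session and DataFrames, and the column names are hypothetical:

```python
# Hypothetical skew mitigation: salt a hot join key so one heavy customer_id
# spreads across N partitions. Assumes existing `spark`, `facts`, `dim`.
from pyspark.sql import functions as F

N = 16  # salt buckets; tune to the observed skew

# Replicate the (small) dimension once per salt value...
dim_salted = dim.crossJoin(spark.range(N).withColumnRenamed("id", "salt"))

# ...and scatter fact rows randomly across the same salt range.
facts_salted = facts.withColumn("salt", (F.rand() * N).cast("int"))

# The join key becomes (customer_id, salt), breaking up the hot partition.
joined = facts_salted.join(dim_salted, ["customer_id", "salt"]).drop("salt")
```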
Desired Skills/Experience:
6+ years of experience in Data Engineering, Analytics Engineering, or related fields, including data modeling experience
Strong Data Engineering fundamentals with the ability to design pipelines, optimize performance, and deliver real-time or near-real-time datasets
Ability to deeply understand data, identifying gaps, designing meaningful transformations, and creating metrics with clear business context
Understanding of how customer data moves through Customer Data Platforms (CDPs) and how to design pipelines that integrate with them
Experience supporting Marketing, Customer Data, MarTech, CDP, segmentation, or personalization teams strongly preferred
Hands-on experience required with Databricks, PySpark, Python, and SQL, including building analytics datasets for dashboards/reporting and for customer behavior analytics or campaign performance insights
Experience designing and implementing features that feed downstream AI or customer-facing applications
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position starts at $150,000-$160,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
IT Data Engineer
Lakewood, CO jobs
IT Data Engineer
Compensation: $125k-$155k (DOE)
Inceed has partnered with a great company to help find a skilled IT Data Engineer to join their team!
Join a dynamic team where innovation meets opportunity. This role is pivotal in advancing AI and data modernization initiatives, bridging traditional database administration with cutting-edge AI data infrastructure. The team thrives on collaboration and offers a hybrid work schedule.
Key Responsibilities & Duties:
Design and maintain scalable data pipelines.
Develop RAG workflows for AI information access.
Build secure connectors and APIs for data retrieval.
Monitor and optimize data flows for consistency.
Lead database administration and performance tuning.
Manage database upgrades and storage optimization.
Implement database security controls and standards.
Support application integrations and data migrations.
Define and maintain data models and metadata.
Collaborate with teams to ensure compliance requirements.
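To make the RAG duty above concrete, here is a minimal sketch of the retrieval step: embed documents, embed the query, and return the closest chunks as model context. The embed() function is a stand-in for whatever embedding service is actually used, and the documents are hypothetical:

```python
# Minimal retrieval sketch for a RAG workflow. embed() is a placeholder for
# a real embedding model/service; documents and query are illustrative.
import numpy as np


def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic-per-text random unit vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)


docs = ["backup policy", "index tuning guide", "PTO handbook"]
doc_vecs = np.stack([embed(d) for d in docs])


def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vecs @ q  # cosine similarity (all vectors are unit norm)
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]


print(retrieve("how do we tune slow indexes?"))
```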
Required Qualifications & Experience:
Bachelor's degree in Computer Science or related field.
7+ years in database administration or data engineering.
Advanced SQL and data modeling skills.
Experience with AI and analytics data pipelines.
Familiarity with cloud-based data ecosystems.
Hands-on experience with RAG and vectorization.
Proficiency in scripting languages like Python.
Experience leading vendor-to-internal transitions.
Nice to Have Skills & Experience:
Experience integrating enterprise systems into data platforms.
Knowledge of data governance frameworks.
Understanding of semantic data modeling.
Experience with cloud migration of database workloads.
Perks & Benefits:
This opportunity includes a comprehensive and competitive benefits package; details will be shared during later stages of the hiring process.
Other Information:
Hybrid work schedule
This position requires a background check and drug test
If you are interested in learning more about the IT Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Data Engineer
New York, NY jobs
Role: Data Engineer
Type: Contract-to-Hire or Full-Time
Domain: Finance preferred, not mandatory
Key Skills & Requirements:
Must-have: Strong Python and Informatica experience
AI/ML Exposure: Familiarity with building and fine-tuning models; not a heavy AI/ML developer role
ETL & Data Engineering: Hands-on experience with ETL pipelines, data modeling, and “data plumbing”
Nice-to-have: Snowflake ecosystem experience, HR systems exposure
Ability to collaborate with business teams handling Data Science
Candidate Profile:
Hands-on Data Engineer comfortable building data pipelines and models
Exposure to AI/ML concepts without being a full AI specialist
Finance domain experience is a plus
Big Data Engineer
Santa Monica, CA jobs
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
Desired Skills/Experience:
Bachelor's degree in a STEM field (science, technology, engineering, or mathematics)
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Senior Data Engineer
Glendale, CA jobs
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
Build tools and services to support data discovery, lineage, governance, and privacy
Collaborate with other software and data engineers and cross-functional teams
Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
Participate in agile and scrum ceremonies to collaborate and refine team processes
Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
Maintain detailed documentation of work and changes to support data quality and data governance requirements
Desired Skills/Experience:
5+ years of data engineering experience developing large data pipelines
Proficiency in at least one major programming language such as: Python, Java or Scala
Strong SQL skills and the ability to create queries to analyze complex datasets
Hands-on production experience with distributed processing systems such as Spark
Experience interacting with and ingesting data efficiently from API data sources
Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
Experience developing APIs with GraphQL
Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
Familiarity with data modeling techniques and data warehousing best practices
Strong algorithmic problem-solving skills
Excellent written and verbal communication skills
Advanced understanding of OLTP versus OLAP environments
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $51.00 and $73.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Azure Data Engineer
Irving, TX jobs
Our client is seeking an Azure Data Engineer to join their team! This position is located in Irving, Texas. THIS ROLE REQUIRES AN ONSITE INTERVIEW IN IRVING; please only apply if you are local and available to interview onsite.
Duties:
Lead the design, architecture, and implementation of key data initiatives and platform capabilities
Optimize existing data workflows and systems to improve performance and cost-efficiency, identifying solutions and guiding teams through implementation
Lead and mentor a team of 2-5 data engineers, providing guidance on technical best practices, career development, and initiative execution
Contribute to the development of data engineering standards, processes, and documentation, promoting consistency and maintainability across teams while enabling business stakeholders
Desired Skills/Experience:
Bachelor's degree or equivalent in Computer Science, Mathematics, Software Engineering, Management Information Systems, etc.
5+ years of relevant work experience in data engineering
Strong technical skills in SQL, PySpark/Python, Azure, and Databricks
Deep understanding of data engineering fundamentals, including database architecture and design, ETL, etc.
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position starts at $140,000-$145,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Direct Client:: W2 position:: Need Hadoop Developer with Spark, Python & Healthcare Exp.:: Jacksonville, FL (Hybrid) :: GC and USC Only::
Jacksonville, FL jobs
GC and USC Only
============================
Role: Hadoop/Analytics Developer
Job Description:
A Hadoop developer is responsible for the design, development and operations of systems that store and manage large amounts of data. Most Hadoop developers have a computer software background and have a degree in information systems, software engineering, computer science, or mathematics.
IT Developers are responsible for the development, programming, and coding of Information Technology solutions. They document detailed system specifications, participate in unit testing and maintenance of planned and unplanned internally developed applications, and evaluate and performance-test purchased products. They are responsible for including IT controls that protect the confidentiality, integrity, and availability of the application and the data it processes or outputs. IT Developers are assigned to moderately complex development projects.
Job Requirements:
• Experience and understanding with unit testing, release procedures, coding design and documentation protocol as well as change management procedures
• Proficiency using versioning tools
• Thorough knowledge of Information Technology fields and computer systems
• Demonstrated organizational, analytical and interpersonal skills
• Flexible team player
• Ability to manage tasks independently and take ownership of responsibilities
• Ability to learn from mistakes and apply constructive feedback to improve performance
• Must demonstrate initiative and effective independent decision-making skills
• Ability to communicate technical information clearly and articulately
• Ability to adapt to a rapidly changing environment
• In-depth understanding of the systems development life cycle
• Proficiency programming in more than one object-oriented programming language
• Proficiency using standard desktop applications such as MS Suite and flowcharting tools such as Visio
• Proficiency using debugging tools
• High critical thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy
Specific Tools/Languages Required:
HADOOP
Spark
Python
SQL/Stored Procedures
Required Work Experience:
• 3+ years of related work experience, including professional experience with technical design and coding in the IT industry
Data Engineer
Tempe, AZ jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration (a minimal DAG sketch follows this list)
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
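A minimal Apache Airflow (2.x) sketch of the orchestration pattern described above: a daily ingest-then-transform dependency. The DAG id and task bodies are hypothetical:

```python
# Hypothetical Airflow 2.x DAG: daily ingest -> transform dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("pull from source systems")


def transform():
    print("build curated Delta tables")


with DAG(
    dag_id="daily_lakehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after ingest succeeds
```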
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Senior Data Engineer
Austin, TX jobs
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing (a minimal Bronze-to-Silver sketch follows this section).
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
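As a minimal sketch of one medallion hop (Bronze to Silver) on Delta Lake; the paths, schema, and cleansing rules are hypothetical, and the Delta package is assumed to be on the cluster:

```python
# Hypothetical Bronze -> Silver hop on Delta Lake. Paths and columns are
# illustrative; assumes a Spark cluster with the Delta Lake package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw landing zone, ingested as-is with no transformations.
bronze = spark.read.format("delta").load("/lake/bronze/orders")

# Silver: deduplicated, validated, and conformed for downstream use.
silver = (
    bronze.dropDuplicates(["order_id"])
    .filter(F.col("order_total") >= 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")
```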
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Data Engineer
Austin, TX jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Data Scientist
Phoenix, AZ jobs
We are seeking a Data Scientist to support advanced analytics and machine learning initiatives across the organization. This role involves working with large, complex datasets to uncover insights, validate data integrity, and build predictive models. A key focus will be developing and refining machine learning models that leverage sales and operational data to optimize pricing strategies at the store level.
Day-to-Day Responsibilities
Compare and validate numbers across multiple data systems
Investigate discrepancies and understand how metrics are derived
Perform data science and data analysis tasks
Build and maintain AI/ML models using Python (a minimal sketch follows this list)
Interpret model results, fine-tune algorithms, and iterate based on findings
Validate and reconcile data from different sources to ensure accuracy
Work with sales and production data to produce item-level pricing recommendations
Support ongoing development of a new data warehouse and create queries as needed
Review Power BI dashboards (Power BI expertise not required)
Contribute to both ML-focused work and general data science responsibilities
Improve and refine an existing ML pricing model already in production
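As a rough sketch of the pricing-model work described above (fit a regressor on sales features and score item-level price recommendations); the features, labels, and data are entirely hypothetical:

```python
# Hypothetical item-level pricing model: train on historical sales features,
# then score new items. Columns and values are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

train = pd.DataFrame({
    "units_sold": [120, 80, 200, 150],
    "cost": [2.0, 3.5, 1.2, 2.8],
    "competitor_price": [4.0, 6.0, 2.5, 5.0],
    "optimal_price": [3.9, 5.8, 2.4, 4.9],  # label from margin analysis
})

features = ["units_sold", "cost", "competitor_price"]
model = GradientBoostingRegressor().fit(train[features], train["optimal_price"])

new_items = pd.DataFrame(
    {"units_sold": [100], "cost": [2.5], "competitor_price": [4.5]}
)
print(model.predict(new_items))  # item-level price recommendation
```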
Qualifications
Strong proficiency with MS SQL Server
Experience creating and deploying machine learning models in Python
Ability to interpret, evaluate, and fine-tune model outputs
Experience validating and reconciling data across systems
Strong foundation in machine learning, data modeling, and backend data operations
Familiarity with querying and working with evolving data environments
Data Scientist
Alhambra, CA jobs
Title: Principal Data Scientist
Duration: 12 Months Contract
Additional Information
California Resident Candidates Only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
Job description:
The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.
The Principal Data Scientist will possess:
Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights
Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles
Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams
Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel
A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight
A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables
Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts
Experience Required:
Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring (a minimal tracking sketch follows this list).
Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
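A minimal MLflow sketch of the model versioning and tracking practice listed above; the experiment name, parameters, and synthetic data are hypothetical:

```python
# Hypothetical MLflow tracking run: log params, a metric, and a versioned
# model artifact. Experiment name and data are invented for illustration.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")
with mlflow.start_run():
    model = LogisticRegression(C=0.5, max_iter=200).fit(X_tr, y_tr)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```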
Education Required & certifications:
This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics is also required:
Microsoft Azure Data Scientist Associate (DP-100)
Databricks Certified Data Scientist or Machine Learning Professional
AWS Machine Learning Specialty
Google Professional Data Engineer
or an equivalent advanced analytics certification
The certification is required and may not be substituted with additional experience.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Name: T Saketh Ram Sharma
Email: *****************************
Internal Id: 25-54101
Career Opportunities: Sr Developer SAP (93624)
Data engineer job at The Freeman Company LLC
About Us
Freeman is a global leader in events, on a mission to redefine live for a new era. With a data-driven approach and the industry's largest network of experts, Freeman's insights shape exhibitions, exhibits, and events that drive audiences to action. Its integrated full-service solutions leverage a 97-year legacy in event management as well as new technologies to deliver moments that matter.
Summary
The Senior Developer SAP ABAP will be responsible for executing activities in all aspects of the Systems Development Life Cycle (SDLC). The Senior Developer SAP ABAP will work on new projects as well as supporting existing applications and be expected to communicate status to stakeholders. The Senior Developer SAP ABAP is expected to influence, persuade, and lead other team members and stakeholders to achieve goals and objectives.
This position will support our IT team. It is eligible to work a hybrid schedule, generally requiring work in-office and/or show-site 2-3 days per week. The ideal candidate will be located in Dallas, TX.
Essential Duties & Responsibilities
Business Analysis:
* Collaborate with the Functional Business Analyst and other business stakeholders to elicit and define assigned requirements.
* Elicit detailed requirements in the form of user stories with acceptance criteria.
* Capture non-functional system requirements (e.g., supported browsers, security, system performance expectations, etc.).
* Produce deliverables as needed (e.g., software requirements documents, use cases, use case diagrams, wireframes, workflow diagrams, process diagrams, and detailed user stories).
* Coordinate with Quality Assurance (QA) and the development team to ensure assigned requirements are understood.
* Attend demos and review developed requirements for accuracy.
* Participate in required testing/validation of system fixes, which may include creating the actual test cases or reviewing test cases created by QA.
* Support User Acceptance Testing.
Systems Analysis/Production Support:
* Plan, design, and implement information systems. Perform coding, testing, debugging, and installation activities as necessary.
* Work with organizational end users to clarify current operating procedures.
* Provide functional and technical support for assigned applications, including integration, configuration, maintenance, enhancements, report development, quality assurance, system upgrades, and other activities as required.
* Log and track defect and development requests including message handling and dispatching, resolution tracking, and problem escalation.
* Analyze defects and triage accordingly.
* Work with the BA, QA and development teams as necessary to facilitate defect resolution.
* Assist other teams and business units in supporting ongoing data archival, disaster recovery, and business continuity planning and testing.
* Maintain up-to-date functional and configuration knowledge.
Quality Assurance:
* Ensure that testing activities allow applications to meet business requirements and identify existing or potential issues.
* Create and maintain testing deliverables based on user requirements and technical specifications.
* Test software in accordance with established quality standards prior to implementation to ensure proper operation and freedom from critical and high defects.
* Analyze test results in order to report any defects, bugs, errors or configuration issues.
* Plan and execute integration, user acceptance and regression testing.
* Communicate testing progress, test results, and other relevant information to project stakeholders.
* Assist in process improvements.
* Cultivate and disseminate knowledge of quality assurance best practices.
* Provide significant input to system specifications and design.
* Develop detailed knowledge of and expertise in assigned applications; analyze system integration impacts and define integration requirements.
* Collaborate with developers to establish the technical vision and analyze tradeoffs between usability and performance needs.
* Rely on knowledge and professional discretion to achieve goals; may guide the day-to-day activities of others. Significant ingenuity and flexibility are expected.
* Look for opportunities to improve processes and implement them.
* Estimate level of effort, develop analysis work plans, and ensure deliverables are timely and of high quality.
* Develop partnerships with stakeholders, serving as an advocate and trusted advisor.
* Provide leadership and mentoring to team members.
* Perform other duties as assigned.
Education & Experience
* Bachelor's degree preferred, High School Diploma or Equivalent with relevant work experience required.
* Seven (7) or more years of experience in the related field in a large corporation or consulting environment; or equivalent combination of education and experience.
* Seven (7) or more years of SAP ABAP experience in SAP (ECC and S/4HANA) implementation, production support, and upgrades.
* 2+ years working with SAP BTP, APIs, and middleware to integrate SAP with other enterprise applications.
* Experience with the RAP model (ABAP RESTful Application Programming Model) for efficient development.
* Good knowledge of SAP S/4HANA concepts, data models, and business processes. Strong knowledge of S/4HANA modules (Finance, OTC, Procurement).
* Good technical understanding of SAP Fiori is an added advantage.
* Develop analysis of technology architecture design in coordination with designers, architects, developers, and business analysts.
* Experience supporting different types of implementations, including enhancement/support projects, new development projects, and third-party package software.
* Must have strong PC skills and working knowledge with Microsoft Office products, SharePoint and tracking software.
* Must have strong communication skills and be able to facilitate meetings and work with external consulting companies.
* Provide leadership and mentoring to team members.
* Must be experienced in business best practices.
* Liaise with business partners.
Travel Requirements
Travel up to 25%
What We Offer
Freeman provides benefits that aim to empower our people and their families to thrive mentally, physically, and financially. These are a handful of the types of programs and benefits our full-time people may be eligible for. There may be some variances in specific benefits across regions.
* Medical, Dental, Vision Insurance
* Tuition Reimbursement
* Paid Parental Leave
* Life, Accident and Disability
* Retirement with Company Match
* Paid Time Off
Diversity Commitment
At Freeman, our commitment to diversity and inclusion is helping us to create not only a great place to work, but also an environment where our employees, our customers and our communities around the world can reach their goals and connect with each other. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, veteran status and other characteristics protected by federal, state or local laws.
#LI-Hybrid