Senior Software Engineer - Mobile
Genesys empowers organizations of all sizes to improve loyalty and business outcomes by creating the best experiences for their customers and employees. Through Genesys Cloud, the AI-powered Experience Orchestration platform, organizations can accelerate growth by delivering empathetic, personalized experiences at scale to drive customer loyalty, workforce engagement, efficiency and operational improvements.
We employ more than 6,000 people across the globe who embrace empathy and cultivate collaboration to succeed. And, while we offer great benefits and perks like larger tech companies, our employees have the independence to make a larger impact on the company and take ownership of their work. Join the team and create the future of customer experience together.
As a Senior Software Engineer on the Workforce Engagement Management (WEM) Mobile Development Team, you will be responsible for full-lifecycle development of a Kotlin Multiplatform mobile application currently shipped for Android/iOS. This app has been in the market for multiple years and there is currently a big opportunity to contribute ideas and code that will be used around the world.
Major Responsibilities/Activities:
• Design, development, and testing of features/functions delivered via iOS and Android applications using Kotlin Multiplatform, Jetpack Compose, Compose Multiplatform, Swift, and SwiftUI.
• Stay current with industry developments and new trends
• Recommend new technologies as components of a solution when appropriate
• Take ownership of features from beginning to end: from design documents and reviews to acceptance testing and deployment
• Understand and comply with PCI, HIPAA, and GDPR
• Adhere to Genesys Code of Business Conduct and Ethics.
Minimum Requirements
• BS in CSE, IT or ECE
• 3+ years of experience in Software Engineering, with 1+ in mobile applications specifically
• Working experience with building user interfaces
• Experience using REST APIs to build mobile applications
• Experience unit testing mobile applications
• Experience troubleshooting and debugging mobile applications
• Experience with the full application lifecycle from development and testing through deployment and support.
The ideal candidate would also have experience with some of the following:
• Published applications on the app stores that we can review
• Kotlin or Kotlin/Native Multiplatform
• Swift / SwiftUI
• Jetpack Compose
• Compose Multiplatform
• Git
• Building REST APIs
• Working with open-source projects, ideally with a history of contributions
Compensation:
This role has a market-competitive salary with an anticipated base compensation range listed below. Actual salaries will vary depending on a candidate's experience, qualifications, skills, and location. This role might also be eligible for commission or performance-based bonus opportunities.
$113,100.00 - $210,100.00
Benefits:
Medical, Dental, and Vision Insurance.
Telehealth coverage
Flexible work schedules and work from home opportunities
Development and career growth opportunities
Open Time Off in addition to 10 paid holidays
401(k) matching program
Adoption Assistance
Fertility treatments
If a Genesys employee referred you, please use the link they sent you to apply.
About Genesys:
Genesys empowers more than 8,000 organizations worldwide to create the best customer and employee experiences. With agentic AI at its core, Genesys Cloud™ is the AI-Powered Experience Orchestration platform that connects people, systems, data and AI across the enterprise. As a result, organizations can drive customer loyalty, growth and retention while increasing operational efficiency and teamwork across human and AI workforces. To learn more, visit ****************
Reasonable Accommodations:
If you require a reasonable accommodation to complete any part of the application process, or are limited in your ability to access or use this online application and need an alternative method for applying, you or someone you know may contact us at reasonable.accommodations@genesys.com.
You can expect a response within 24-48 hours. To help us provide the best support, click the email link above to open a pre-filled message and complete the requested information before sending. If you have any questions, please include them in your email.
This email is intended to support job seekers requesting accommodations. Messages unrelated to accommodation (such as application follow-ups or resume submissions) may not receive a response.
Genesys is an equal opportunity employer committed to fairness in the workplace. We evaluate qualified applicants without regard to race, color, age, religion, sex, sexual orientation, gender identity or expression, marital status, domestic partner status, national origin, genetics, disability, military and veteran status, and other protected characteristics.
Please note that recruiters will never ask for sensitive personal or financial information during the application phase.
Applied AI Interface Engineer
Alexandria, VA jobs
MANTECH seeks a motivated, career and customer-oriented Applied AI Interface Engineer to join our team in Alexandria, VA. As part of the position, you will act as a Software Engineer designing and implementing services and components for AI applications.
Responsibilities include but are not limited to:
Designs and builds User Interfaces using modern UX/UI standards.
Develops, implements, and maintains full-stack software solutions for AI-enabled applications.
Works closely with the Software Architect to understand project requirements and translate them into technical specifications.
Develops and integrates AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
Stays current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
Documents software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
Participates in code reviews and provides constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.
Minimum Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
4 or more years (2 with Master's) of experience in software development, systems integration, data management, or related fields.
Proficiency in JavaScript, including familiarity with modern frameworks and libraries such as React, Angular, or Vue.js.
Solid knowledge of HTML and CSS, including responsive design principles and front-end workflows.
Knowledge of Python and REST API frameworks.
Basic understanding of user interface (UI) and user experience (UX) design principles, with the ability to collaborate with designers to translate wireframes into functional code.
Problem-Solving Skills: Strong analytical and problem-solving abilities, with the capacity to debug and resolve issues related to front-end code.
Experience with Generative AI including API access to large language models (LLMs).
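To make the last requirement above concrete, below is a minimal sketch of calling a hosted large language model over an API from Python. It assumes the OpenAI Python SDK purely for illustration; the actual provider, SDK, model name, and prompt used on this program are not specified in the posting and are placeholders here.

```python
# Minimal sketch: calling an LLM over an API from a Python service.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# OPENAI_API_KEY; the model name and prompt are placeholders, and the
# provider/SDK actually used on this program may differ.
import os

from openai import OpenAI

def summarize(text: str) -> str:
    """Ask a hosted LLM to summarize a block of text."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You summarize technical documents."},
            {"role": "user", "content": f"Summarize in two sentences:\n\n{text}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Generative AI features are exposed to the UI through a REST API."))
```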
Preferred Qualifications:
Experience with Docker, Kubernetes, or other containerization technology.
Experience working in AWS environments.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills. Ability to work effectively in a team-oriented environment.
Experience working with data ingest and transformation.
Clearance Requirements:
Must possess a current and active TS/SCI clearance
Physical Requirements:
The person in this position must be able to remain in a stationary position 50% of the time.
Occasionally move about inside the office to access file cabinets, office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
Senior CNO Developer
Annapolis, MD jobs
MANTECH seeks a motivated, career and customer-oriented Senior CNO Developer to join our team in Annapolis Junction, Maryland.
We're looking for a Senior Capability Developer to join our elite team. In this role, you'll apply your deep technical expertise to analyze, reverse-engineer, and develop mission-critical capabilities that directly support national security objectives. You will be a key player in a fast-paced environment, tackling unique challenges at the intersection of hardware, software, and embedded systems.
Responsibilities include but are not limited to:
Develop custom software tools and applications using Python, C, and Assembly, focusing on embedded and resource-constrained systems.
Conduct rigorous code reviews to ensure the quality, security, and performance of developed software.
Reverse engineer complex hardware and software systems to understand their inner workings and identify potential vulnerabilities.
Perform in-depth vulnerability research to discover and analyze weaknesses in a variety of targets.
Collaborate with a team of skilled engineers to design and implement innovative solutions to challenging technical problems.
Minimum Qualifications:
Bachelor's degree and 12 years of experience; or, a high school diploma with 16 years of experience; or, an Associate's degree with 14 years of experience. A Master's degree may substitute for 2 years of experience, and a PhD may substitute for 4 years of experience.
Must have 7 years of position-relevant work experience
Proficiency in programming and application development.
Strong programming skills, particularly in Python, C, and Assembly.
Deep expertise in managing, configuring, and troubleshooting Linux.
Experience in embedded systems.
Experience in reverse engineering and vulnerability research of hardware and software.
Experience in code review.
Preferred Qualifications:
Experience in CNO (Computer Network Operations) Development.
Experience in virtualization.
Knowledge of IoT (Internet of Things) devices.
Experience with Linux Kernel development and sockets.
Knowledge of integrating security tools into the CI/CD (Continuous Integration/Continuous Delivery) pipeline.
Networking skills.
Clearance Requirements:
Must have a current/active Top Secret/SCI clearance.
Physical Requirements:
The person in this position must be able to remain in a stationary position 50% of the time. Occasionally move about inside the office to access file cabinets, office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
Snowflake Data Engineer (DBT SQL)
San Jose, CA jobs
Duration: 6 months
Key Responsibilities
• Design, develop, and optimize data pipelines using Snowflake and DBT SQL.
• Implement and manage data warehousing concepts, metadata management, and data modeling.
• Work with data lakes, multi-dimensional models, and data dictionaries.
• Utilize Snowflake features such as Time Travel and Zero-Copy Cloning (see the sketch after this list).
• Perform query performance tuning and cost optimization in cloud environments.
• Administer Snowflake architecture, warehousing, and processing.
• Develop and maintain PL/SQL Snowflake solutions.
• Apply design patterns for scalable and maintainable data solutions.
• Collaborate with cross-functional teams and tech leads across multiple tracks.
• Provide technical and functional guidance to team members.
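As a brief illustration of the Time Travel and Zero-Copy Cloning responsibility above, here is a minimal sketch using the snowflake-connector-python package. The connection parameters, warehouse, and table names (ORDERS, ORDERS_DEV) are placeholders, not details from the role.

```python
# Minimal sketch of Time Travel and Zero-Copy Cloning from Python.
# Assumes snowflake-connector-python; account/credentials, warehouse,
# database, and table names below are placeholders.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Time Travel: read the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM ORDERS AT (OFFSET => -3600)")
print("rows one hour ago:", cur.fetchone()[0])

# Zero-Copy Cloning: create a writable dev copy without duplicating storage.
cur.execute("CREATE OR REPLACE TABLE ORDERS_DEV CLONE ORDERS")

cur.close()
conn.close()
```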
Required Skills & Experience
• Hands-on Snowflake development experience (mandatory).
• Strong proficiency in SQL and DBT SQL.
• Knowledge of data warehousing concepts, metadata management, and data modeling.
• Experience with data lakes, multi-dimensional models, and data dictionaries.
• Expertise in Snowflake features (Time Travel, Zero-Copy Cloning).
• Strong background in query optimization and cost management.
• Familiarity with Snowflake administration and pipeline development.
• Knowledge of PL/SQL and SQL databases (additional plus).
• Excellent communication, leadership, and organizational skills.
• Strong team player with a positive attitude.
Sr Data Platform Engineer
Elk Grove, CA jobs
Hybrid role, 3 days per week in the Elk Grove, CA office; no fully remote option
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud-native tools (see the sketch after this list).
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
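A minimal sketch of the Spark + Delta Lake ETL responsibility referenced above. It assumes a Databricks or otherwise Delta-enabled Spark environment; the paths, columns, and partitioning scheme are placeholders rather than details of the client's actual pipelines.

```python
# Minimal ETL sketch: JSON landing zone -> cleaned Delta table.
# Assumes a Delta-enabled Spark environment (e.g. Databricks); all paths
# and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: read semi-structured JSON landed by an upstream process.
raw = spark.read.json("/mnt/landing/orders/*.json")

# Transform: basic deduplication, typing, and filtering.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: append to a Delta table partitioned by date for downstream analytics.
(
    clean.withColumn("order_date", F.to_date("order_ts"))
         .write.format("delta")
         .mode("append")
         .partitionBy("order_date")
         .save("/mnt/curated/orders")
)
```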
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
Senior Data Governance Consultant (Informatica)
Plano, TX jobs
About Paradigm - Intelligence Amplified
Paradigm is a strategic consulting firm that turns vision into tangible results. For over 30 years, we've helped Fortune 500 and high-growth organizations accelerate business outcomes across data, cloud, and AI. From strategy through execution, we empower clients to make smarter decisions, move faster, and maximize return on their technology investments. What sets us apart isn't just what we do; it's how we do it. Driven by a clear mission and values rooted in integrity, excellence, and collaboration, we deliver work that creates lasting impact. At Paradigm, your ideas are heard, your growth is prioritized, and your contributions make a difference.
Summary:
We are seeking a Senior Data Governance Consultant to lead and enhance data governance capabilities across a financial services organization
The Senior Data Governance Consultant will collaborate closely with business, risk, compliance, technology, and data management teams to define data standards, strengthen data controls, and drive a culture of data accountability and stewardship
The ideal candidate will have deep experience in developing and implementing data governance frameworks, data policies, and control mechanisms that ensure compliance, consistency, and trust in enterprise data assets
Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred
This position is Remote, with occasional travel to Plano, TX
Responsibilities:
Data Governance Frameworks:
Design, implement, and enhance data governance frameworks aligned with regulatory expectations (e.g., BCBS 239, GDPR, CCPA, DORA) and internal control standards
Policy & Standards Development:
Develop, maintain, and operationalize data policies, standards, and procedures that govern data quality, metadata management, data lineage, and data ownership
Control Design & Implementation:
Define and embed data control frameworks across data lifecycle processes to ensure data integrity, accuracy, completeness, and timeliness
Risk & Compliance Alignment:
Work with risk and compliance teams to identify data-related risks and ensure appropriate mitigation and monitoring controls are in place
Stakeholder Engagement:
Partner with data owners, stewards, and business leaders to promote governance practices and drive adoption of governance tools and processes
Data Quality Management:
Define and monitor data quality metrics and KPIs, establishing escalation and remediation procedures for data quality issues
Metadata & Lineage:
Support metadata and data lineage initiatives to increase transparency and enable traceability across systems and processes
Reporting & Governance Committees:
Prepare materials and reporting for data governance forums, risk committees, and senior management updates
Change Management & Training:
Develop communication and training materials to embed governance culture and ensure consistent understanding across the organization
Required Qualifications:
7+ years of experience in data governance, data management, or data risk roles within financial services (banking, insurance, or asset management preferred)
Strong knowledge of data policy development, data standards, and control frameworks
Proven experience aligning data governance initiatives with regulatory and compliance requirements
Familiarity with Informatica data governance and metadata tools
Excellent communication skills with the ability to influence senior stakeholders and translate technical concepts into business language
Deep understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks)
Bachelor's or Master's Degree in Information Management, Data Science, Computer Science, Business, or related field
Preferred Qualifications:
Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred
Experience with data risk management or data control testing
Knowledge of financial regulatory frameworks (e.g., Basel, MiFID II, Solvency II, BCBS 239)
Certifications, such as Informatica, CDMP, or DCAM
Background in consulting or large-scale data transformation programs
Key Competencies:
Strategic and analytical thinking
Strong governance and control mindset
Excellent stakeholder and relationship management
Ability to drive organizational change and embed governance culture
Attention to detail with a pragmatic approach
Why Join Paradigm
At Paradigm, integrity drives innovation. You'll collaborate with curious, dedicated teammates, solving complex problems and unlocking immense data value for leading organizations. If you seek a place where your voice is heard, growth is supported, and your work creates lasting business value, you belong at Paradigm.
Learn more at ********************
Policy Disclosure:
Paradigm maintains a strict drug-free workplace policy. All offers of employment are contingent upon successfully passing a standard 5-panel drug screen. Please note that a positive test result for any prohibited substance, including marijuana, will result in disqualification from employment, regardless of state laws permitting its use. This policy applies consistently across all positions and locations.
Data Engineer (AWS Redshift, BI, Python, ETL)
Manhattan Beach, CA jobs
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
Senior Data Engineer
Chicago, IL jobs
Requires visa-independent candidates.
Note: OPT, CPT, and H1B holders cannot be considered at this time.
Design, develop, and maintain scalable ETL pipelines using AWS Glue (see the sketch after this list)
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
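A minimal sketch of the AWS Glue responsibility referenced above: a Glue PySpark job that reads from the Data Catalog, applies a mapping, and writes Parquet to S3. The catalog database, table, column names, and bucket are placeholders, not details from the posting.

```python
# Minimal AWS Glue ETL job sketch (runs as a Glue PySpark job).
# Catalog database/table, mappings, and the S3 path are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Transform: keep and rename the columns downstream consumers need.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_total", "double", "amount", "double"),
        ("created_at", "string", "order_ts", "timestamp"),
    ],
)

# Load: write Parquet to S3 for Redshift Spectrum / Athena consumption.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```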
Required Skills
Strong hands-on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and Data Catalogs
Data Engineer
Jersey City, NJ jobs
ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
Skillset: Data Engineer
Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
Nice to Haves: Java, Spark, React Js
Interview Process: 2 rounds; the 2nd will be on-site
You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you.
As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
Job responsibilities:
• Supports review of controls to ensure sufficient protection of enterprise data.
• Advises and makes custom configuration changes in one to two tools to generate a product at the business's or customer's request.
• Updates logical or physical data models based on new use cases.
• Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
• Adds to team culture of diversity, opportunity, inclusion, and respect.
• Develop enterprise data models; design, develop, and maintain large-scale data processing pipelines and infrastructure; lead code reviews and provide mentoring through the process; drive data quality; ensure data accessibility to analysts and data scientists; ensure compliance with data governance requirements; and ensure data engineering practices align with business goals.
Required qualifications, capabilities, and skills
• Formal training or certification on data engineering concepts and 2+ years applied experience
• Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
• Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
• Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark.
• Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
• Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks.
• Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
• Advanced proficiency in cloud data lakehouse platform such as AWS data lake services, Databricks or Hadoop, relational data store such as Postgres, Oracle or similar, and at least one NOSQL data store such as Cassandra, Dynamo, MongoDB or similar
• Advanced proficiency in Cloud Data Warehouse Snowflake, AWS Redshift
• Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
• Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, or Protobuf; big-data storage formats such as Parquet or Iceberg; data processing methodologies such as batch, micro-batching, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools.
Preferred qualifications, capabilities, and skills
• Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights.
• Strong Python and Spark skills
Data Engineer AI Systems
Saint Louis, MO jobs
Job Title: Data Engineer - AI Systems
6 Months
St. Louis, MO Day 1 onsite role
Data Engineer - AI Systems (Databricks)
Primary Skills: Data Engineer, Databricks, Python, PySpark, AI/ML
We'rebuilding intelligent, Databricks-powered AI systems that structure and activate information from diverse enterprise sources (Confluence, OneDrive, PDFs, andmore). As a Data Engineer, you'll design and optimize the data pipelinesthat transform raw and unstructured content into clean, AI-ready datasets formachine learning and generative AI agents.
You'llcollaborate with a cross-functional team of Machine Learning Engineers,Software Developers, and domain experts to create high-quality data foundationsthat power Databricks-native AI agents and retrieval systems.
KeyResponsibilities
Develop Scalable Pipelines: Design, build, and maintain high-performance ETL and ELT workflows using Databricks, PySpark, and Delta Lake (see the sketch after this list).
Data Integration: Build APIs and connectors to ingest data from collaboration platforms such as Confluence, OneDrive, and other enterprise systems.
Unstructured Data Handling: Implement extraction and transformation pipelines for text, PDFs, and scanned documents using Databricks OCR and related tools.
Data Modeling: Design Delta Lake and Unity Catalog data models for both structured and vectorized (embedding-based) data stores.
Data Quality & Observability: Apply validation, version control, and quality checks to ensure pipeline reliability and data accuracy.
Collaboration: Work closely with ML Engineers to prepare datasets for LLM fine-tuning and vector database creation, and with Software Engineers to deliver end-to-end data services.
Performance & Automation: Optimize workflows for scale and automation, leveraging Databricks Jobs, Workflows, and CI/CD best practices.
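A minimal sketch of the pipeline and unstructured-data responsibilities referenced above: chunking already-extracted document text into AI-ready rows in a Delta table so a downstream embedding or vector-search step can consume them. Table names, chunk sizes, and the extraction step itself are placeholders; the team's actual OCR tooling and Unity Catalog layout are not described in the posting.

```python
# Minimal sketch: turn extracted document text into chunked, AI-ready Delta rows.
# Assumes a Delta-enabled Spark environment; source/target table names and the
# chunking parameters are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F, types as T

spark = SparkSession.builder.appName("doc-chunking-sketch").getOrCreate()

def chunk_text(text: str, size: int = 1000, overlap: int = 200):
    """Split extracted text into overlapping chunks for embedding."""
    if not text:
        return []
    step = size - overlap
    return [text[i : i + size] for i in range(0, len(text), step)]

chunk_udf = F.udf(chunk_text, T.ArrayType(T.StringType()))

# Placeholder source table with columns: doc_id, source, text.
docs = spark.read.table("raw.extracted_documents")

chunks = (
    docs.withColumn("chunk", F.explode(chunk_udf(F.col("text"))))
        .withColumn("chunk_id", F.monotonically_increasing_id())
        .select("doc_id", "source", "chunk_id", "chunk")
)

chunks.write.format("delta").mode("overwrite").saveAsTable("curated.document_chunks")
```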
What You Bring
Experience with data engineering, ETL development, or data pipeline automation.
Proficiency in Python, SQL, and PySpark.
Hands-on experience with Databricks, Spark, and Delta Lake.
Familiarity with data APIs, JSON, and unstructured data processing (OCR, text extraction).
Understanding of data versioning, schema evolution, and data lineage concepts.
Interest in AI/ML data pipelines, vector databases, and intelligent data systems.
Bonus Skills
Experience with vector databases (e.g., Pinecone, Chroma, FAISS) or Databricks' Vector Search.
Exposure to LLM-based architectures, LangChain, or Databricks Mosaic AI.
Knowledge of data governance frameworks, Unity Catalog, or access control best practices.
Familiarity with REST API development or data synchronization services (e.g., Airbyte, Fivetran, custom connectors).
Data Engineer w/ Python & SQL
Alpharetta, GA jobs
We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
MUST HAVES
A problem solver with the ability to analyze and research complex issues and problems, and to propose actionable solutions and/or strategies.
Solid understanding of and hands-on experience with major cloud platforms.
Experience in designing and implementing data pipelines.
Must have strong Python, SQL & GCP skills
Responsibilities
Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer (see the sketch after this list).
Develop and tune BigQuery models, queries, and ingestion processes.
Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
Ensure data governance, security, and reliable pipeline operations.
Collaborate with data science teams and support Vertex AI-based ML workflows.
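A minimal sketch of the streaming responsibility referenced above: an Apache Beam pipeline that reads JSON events from Pub/Sub and streams them into BigQuery, runnable on Dataflow. The project, subscription, table, and schema are placeholders, not details from the posting.

```python
# Minimal Apache Beam streaming sketch: Pub/Sub -> BigQuery.
# Project, subscription, table, and schema below are placeholders; pass
# --runner=DataflowRunner and project options to run on Dataflow.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="example-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```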
Must-Have
Must have strong Python, SQL & GCP skills
3-5+ years of data engineering experience.
Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
Solid ETL/ELT and data modeling experience.
Nice-to-Have
GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
Data Engineer
Alpharetta, GA jobs
5 days onsite in Alpharetta, GA
Skills required:
Python
Data Pipeline
Data Analysis
Data Modeling
Must have solid Cloud experience
AI/ML
Strong problem-solving skills
Strong communication skills
A problem solver with the ability to analyze and research complex issues and problems, and to propose actionable solutions and/or strategies.
Solid understanding of and hands-on experience with major cloud platforms.
Experience in designing and implementing data pipelines.
Must have experience with one of the following: GCP, AWS, or Azure, and must have the drive to learn GCP.
Data Engineer
Austin, TX jobs
We are seeking a Data Engineer to join a dynamic Agile team and support the build and enhancement of a large-scale data integration hub. This role requires hands-on experience in data acquisition, ETL automation, SQL development, and performance analytics.
What You'll Do
✔ Lead technical work within Agile development teams
✔ Automate ETL processes using Informatica Power Center / IICS
✔ Develop complex Oracle/Snowflake SQL scripts & views
✔ Integrate data from multiple sources (Oracle, SQL Server, Excel, Access, PDF)
✔ Support CI/CD and deployment processes
✔ Produce technical documentation, diagrams & mockups
✔ Collaborate with architects, engineers & business stakeholders
✔ Participate in Sprint ceremonies & requirements sessions
✔ Ensure data quality, validation & accuracy
Must Have Experience
✅ 8+ years:
Informatica Power Center / IICS
ETL workflow development
SQL development (Oracle/Snowflake)
Data warehousing & analytics
Technical documentation (Visio/Erwin, MS Office, MS Project)
Azure Data Engineer Sr
Irving, TX jobs
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, Extract, transform and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
Sr. Cloud Data Engineer
Malvern, PA jobs
Job Title: Sr. Cloud Data Engineer
Duration: 12 months+ Contract
Contract Description:
Responsibilities:
Maintain and optimize AWS-based data pipelines to ensure timely and reliable data delivery.
Develop and troubleshoot workflows using AWS Glue, PySpark, Step Functions, and DynamoDB.
Collaborate on code management and CI/CD processes using Bitbucket, GitHub, and Bamboo.
Participate in code reviews and repository management to uphold coding standards.
Provide technical guidance and mentorship to junior engineers and assist in team coordination.
Qualifications:
9-10 years of experience in data engineering with strong hands-on AWS expertise.
Proficient in AWS Glue, PySpark, Step Functions, and DynamoDB.
Skilled in managing code repositories and CI/CD pipelines (Bitbucket, GitHub, Bamboo).
Experience in team coordination or mentoring roles.
Familiarity with Wealth Asset Management, especially personal portfolio performance, is a plus
Sr. Data Engineer (SQL+Python+AWS)
Saint Petersburg, FL jobs
Looking for a Sr. Data Engineer (SQL+Python+AWS) for a 12+ month contract (potential extension, or may convert to full-time), hybrid at St. Petersburg, FL 33716, with a direct financial client; W2 only, for US Citizens or Green Card Holders.
Notes from the Hiring Manager:
• Setting up Python environments and data structures to support the Data Science/ML team.
• No prior Data Science or Machine Learning experience required.
• Role involves building new data pipelines and managing file-loading connections.
• Strong SQL skills are essential.
• Contract-to-hire position.
• Hybrid role based in St. Pete, FL (33716) only.
Duties:
This role involves building and maintaining data pipelines that connect Oracle-based source systems to AWS cloud environments to provide well-structured data for analysis and machine learning in AWS SageMaker.
It includes working closely with data scientists to deliver scalable data workflows as a foundation for predictive modeling and analytics.
• Develop and maintain data pipelines to extract, transform, and load data from Oracle databases and other systems into AWS environments (S3, Redshift, Glue, etc.); see the sketch after this list.
• Collaborate with data scientists to ensure data is prepared, cleaned, and optimized for SageMaker-based machine learning workloads.
• Implement and manage data ingestion frameworks, including batch and streaming pipelines.
• Automate and schedule data workflows using AWS Glue, Step Functions, or Airflow.
• Develop and maintain data models, schemas, and cataloging processes for discoverability and consistency.
• Optimize data processes for performance and cost efficiency.
• Implement data quality checks, validation, and governance standards.
• Work with DevOps and security teams to comply with RJ standards.
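A minimal sketch of the first duty above, using the libraries named later in this posting (pandas, boto3, pyodbc): extract a slice of an Oracle table, tidy it, and land it in S3 as Parquet for Glue and SageMaker consumers. The DSN, query, bucket, and key are placeholders.

```python
# Minimal sketch: Oracle -> pandas -> Parquet in S3.
# Assumes pyodbc with an Oracle ODBC DSN, pandas with pyarrow installed,
# and boto3 credentials; DSN, query, bucket, and key are placeholders.
import io

import boto3
import pandas as pd
import pyodbc

# Extract: pull a bounded slice from an Oracle source via ODBC.
conn = pyodbc.connect("DSN=ORACLE_SRC;UID=etl_user;PWD=***")  # placeholder DSN
df = pd.read_sql(
    "SELECT * FROM sales.orders WHERE order_date >= DATE '2024-01-01'", conn
)
conn.close()

# Transform: light cleanup so downstream ML features are consistent.
df["order_date"] = pd.to_datetime(df["order_date"])
df = df.dropna(subset=["order_id"])

# Load: write Parquet to S3 for Glue crawlers / SageMaker to pick up.
buffer = io.BytesIO()
df.to_parquet(buffer, index=False)
boto3.client("s3").put_object(
    Bucket="example-data-lake",
    Key="curated/orders/orders_2024.parquet",
    Body=buffer.getvalue(),
)
```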
Skills:
Required:
• Strong proficiency with SQL and hands-on experience working with Oracle databases.
• Experience designing and implementing ETL/ELT pipelines and data workflows.
• Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM.
• Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.).
• Solid understanding of data modeling, relational databases, and schema design.
• Familiarity with version control, CI/CD, and automation practices.
• Ability to collaborate with data scientists to align data structures with model and analytics requirements
Preferred:
• Experience integrating data for use in AWS SageMaker or other ML platforms.
• Exposure to MLOps or ML pipeline orchestration.
• Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation).
• Knowledge of data warehouse design patterns and best practices.
• Experience with data orchestration tools (e.g., Apache Airflow, Step Functions).
• Working knowledge of Java is a plus.
Education:
B.S. in Computer Science, MIS or related degree and a minimum of five (5) years of related experience or combination of education, training and experience.
Senior Data Engineer
McLean, VA jobs
The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing.
Experience with tools such as Jenkins, SonarQube AND/OR Fortify are preferred for this role.
Experience in Angular and DevOps are nice to haves for this role.
Must Have Qualifications: PySpark/Python based microservices, AWS EKS, Postgres SQL Database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube and Fortify.
Responsibilities:
Development of microservices based on Python, PySpark, AWS EKS, AWS Postgres for a data-oriented modernization project.
New System: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest
Perform System, functional and data analysis on the current system and create technical/functional requirement documents.
Current System: Informatica, SAS, AutoSys, DB2
Write automated tests using Behave/Cucumber, based on the new microservices-based architecture (see the test sketch after this list)
Promote top code quality and solve issues related to performance tuning and scalability.
Strong skills in DevOps, Docker/container-based deployments to AWS EKS using Jenkins and experience with SonarQube and Fortify.
Able to communicate and engage with business teams and analyze the current business requirements (BRS documents) and create necessary data mappings.
Strong skills and experience in reporting application development and data analysis are preferred
Knowledge in Agile methodologies and technical documentation.
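A minimal sketch of the automated-testing responsibility referenced above: a Pytest unit test of a small PySpark transformation run against a local Spark session. The transformation and column names are hypothetical stand-ins for whatever the modernized services expose; the posting's Behave/Cucumber acceptance tests would sit alongside unit tests like this.

```python
# Minimal Pytest sketch for a PySpark transformation.
# The transformation, table layout, and column names are placeholders.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def filter_active_loans(df):
    """Example transformation under test: keep active, positive-balance loans."""
    return df.filter((F.col("status") == "ACTIVE") & (F.col("balance") > 0))

@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    yield session
    session.stop()

def test_filter_active_loans(spark):
    rows = [("L1", "ACTIVE", 100.0), ("L2", "CLOSED", 50.0), ("L3", "ACTIVE", 0.0)]
    df = spark.createDataFrame(rows, ["loan_id", "status", "balance"])
    result = filter_active_loans(df).collect()
    assert [r.loan_id for r in result] == ["L1"]
```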
Data Engineer
Bloomington, MN jobs
Are you an experienced Data Engineer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Engineer to work at their company in Bloomington, MN.
Primary Responsibilities/Accountabilities:
Develop and maintain scalable ETL/ELT pipelines using Databricks and Airflow (see the sketch after this list).
Build and optimize Python-based data workflows and SQL queries for large datasets.
Ensure data quality, reliability, and high performance across pipelines.
Collaborate with cross-functional teams to support analytics and reporting requirements.
Monitor, troubleshoot, and improve production data workflows.
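A minimal sketch of the first responsibility referenced above: an Airflow DAG (Airflow 2.4+ syntax assumed) that runs a daily transform step followed by a data quality check. The task logic, schedule, and names are placeholders; in practice the transform would likely trigger a Databricks job rather than plain Python.

```python
# Minimal Airflow DAG sketch: daily transform followed by a quality check.
# Assumes Airflow 2.4+; task bodies, DAG id, and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_transform(**context):
    # Placeholder: in practice this might call the Databricks Jobs API or a
    # Databricks operator instead of plain Python.
    print("running transform for", context["ds"])

def check_quality(**context):
    print("running data quality checks for", context["ds"])

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="transform", python_callable=run_transform)
    quality = PythonOperator(task_id="quality_check", python_callable=check_quality)
    transform >> quality
```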
Qualifications:
Strong hands-on experience with Databricks, Python, SQL, and Apache Airflow.
6-10+ years of experience in Data Engineering.
Experience with cloud platforms (Azure/AWS/GCP) and big data ecosystems.
Solid understanding of data warehousing, data modelling, and distributed data processing.
Senior Snowflake Data Engineer
Santa Clara, CA jobs
About the job
Why Zensar?
We're a bunch of hardworking, fun-loving, people-oriented technology enthusiasts. We love what we do, and we're passionate about helping our clients thrive in an increasingly complex digital world. Zensar is an organization focused on building relationships, with our clients and with each other-and happiness is at the core of everything we do. In fact, we're so into happiness that we've created a Global Happiness Council, and we send out a Happiness Survey to our employees each year. We've learned that employee happiness requires more than a competitive paycheck, and our employee value proposition-grow, own, achieve, learn (GOAL)-lays out the core opportunities we seek to foster for every employee. Teamwork and collaboration are critical to Zensar's mission and success, and our teams work on a diverse and challenging mix of technologies across a broad industry spectrum. These industries include banking and financial services, high-tech and manufacturing, healthcare, insurance, retail, and consumer services. Our employees enjoy flexible work arrangements and a competitive benefits package, including medical, dental, vision, 401(k), among other benefits. If you are looking for a place to have an immediate impact, to grow and contribute, where we work hard, play hard, and support each other, consider joining team Zensar!
Zensar is seeking a Senior Snowflake Data Engineer in Santa Clara, CA (work from office all 5 days). The position is open as a full-time role, with excellent benefits and growth opportunities, as well as a contract role.
Job Description:
Key Requirements:
Strong hands-on experience in data engineering using Snowflake and Databricks, with proven ability to build and optimize large-scale data pipelines.
Deep understanding of data architecture principles, including ingestion, transformation, storage, and access control.
Solid experience in system design and solution architecture, focusing on scalability, reliability, and maintainability.
Expertise in ETL/ELT pipeline design, including data extraction, transformation, validation, and load processes.
In-depth knowledge of data modeling techniques (dimensional modeling, star, and snowflake schemas).
Skilled in optimizing compute and storage costs across Snowflake and Databricks environments.
Strong proficiency in administration, including database design, schema management, user roles, permissions, and access control policies.
Hands-on experience implementing data lineage, quality, and monitoring frameworks.
Advanced proficiency in SQL for data processing, transformation, and automation.
Experience with reporting and visualization tools such as Power BI and Sigma Computing.
Excellent communication and collaboration skills, with the ability to work independently and drive technical initiatives.
Zensar believes that diversity of backgrounds, thought, experience, and expertise fosters the robust exchange of ideas that enables the highest quality collaboration and work product. Zensar is an equal opportunity employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Zensar is committed to providing veteran employment opportunities to our service men and women. Zensar is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired.
Zensar does not facilitate/sponsor any work authorization for this position.
Candidates who are currently employed by a client or vendor of Zensar may be ineligible for consideration.
Zensar values your privacy. We'll use your data in accordance with our privacy statement located at: *********************************
Python Data Engineer- THADC5693417
Houston, TX jobs
Must Haves:
Strong proficiency in Python; 5+ years' experience.
Expertise in FastAPI and microservices architecture and coding
Linking Python-based apps with SQL and NoSQL databases
Deployments on Docker and Kubernetes, plus monitoring tools
Experience with automated testing and test-driven development
Git source control, GitHub Actions, CI/CD, VS Code, and Copilot
Expertise in both on-prem SQL databases (Oracle, SQL Server, Postgres, DB2) and NoSQL databases
Working knowledge of data warehousing and ETL
Able to explain the business functionality of the projects/applications they have worked on
Ability to multitask and work on multiple projects simultaneously.
NO CLOUD; they are on-prem
Day to Day:
Insight Global is looking for a Python Data Engineer for one of our largest oil and gas clients in Downtown Houston, TX. This person will be responsible for building Python-based integrations between back-end SQL and NoSQL databases, architecting and coding FastAPI microservices, and performing testing on back-office applications. The ideal candidate will have experience developing applications with Python and microservices and implementing complex business functionality in Python.
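To illustrate the FastAPI and database work described above, here is a minimal, hedged sketch of a read-only endpoint backed by a SQL database. SQLite stands in for the on-prem databases listed in the must-haves, and the table, connection, and endpoint are placeholders rather than the client's actual service contracts.

```python
# Minimal FastAPI microservice sketch: one read-only endpoint over a SQL table.
# SQLite is a stand-in for the on-prem databases (Oracle/SQL Server/Postgres/DB2);
# the table, connection string, and route are placeholders.
import sqlite3

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service-sketch")

class Order(BaseModel):
    order_id: str
    amount: float

def get_connection():
    # Placeholder: swap for the appropriate on-prem driver (e.g. python-oracledb, pyodbc).
    return sqlite3.connect("orders.db")

@app.get("/orders/{order_id}", response_model=Order)
def read_order(order_id: str) -> Order:
    conn = get_connection()
    try:
        row = conn.execute(
            "SELECT order_id, amount FROM orders WHERE order_id = ?", (order_id,)
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise HTTPException(status_code=404, detail="order not found")
    return Order(order_id=row[0], amount=row[1])

# Run locally with: uvicorn main:app --reload
```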