
Game Engineer jobs at ZeniMax Media

- 2267 jobs
  • Lead Gameplay Features Programmer

    ZeniMax Media, Inc. (4.5 company rating)

    Game engineer job at ZeniMax Media

    id Software, part of the ZeniMax Media Inc. family of companies, is seeking an experienced Lead Gameplay Features Programmer to join our Dallas team in the development of an unannounced video game project and to work on some of the industry's most celebrated and popular AAA titles. You will work closely with programmers, game designers, and artists to implement and improve fun game features for rewarding customer experiences.

    Responsibilities
    - Collaborate on the implementation of new gameplay features: player and character behaviors, combat and powers mechanics, user interface, etc.
    - Work closely with designers, artists, and other programmers to iterate on gameplay features and ensure a great player experience.
    - Develop and own full aspects of the game experience.
    - Collaborate with other members of the programming team to build sustainable, maintainable technologies and optimized code on all platforms.
    - Write clear, maintainable, portable C++ code.
    - Iterate quickly and maintain tight feedback loops while working with the team.
    - Review and critique code in a constructive manner.

    Qualifications
    - 6+ years in the games industry or demonstrable work on a first-person shooter game
    - 2+ years in the games industry leading a team of direct reports
    - Strong knowledge of C/C++
    - Excellent math skills
    - Solid architecture and design ability
    - Ability to fearlessly jump into large, existing code bases
    - Strong communication and organizational skills
    - Ability to contribute innovative and original ideas toward all aspects of game production and development
    - Bachelor's degree in Computer Science or a related field, or equivalent experience

    Preferred Skills
    - Experience with any version of id Tech
    - Console experience
    - B.S. in Computer Science or equivalent study in a related field

    Salary Range
    Lead Gameplay Features Programmer - The typical base pay range for this position at the start of employment is expected to be between $125,000 and $240,000 per year. ZeniMax has different base pay ranges for different work locations within the United States, which allows us to pay employees competitively and consistently in different geographic markets. The range above reflects the potential base pay across the U.S. for this role; the applicable base pay range will depend on what ultimately is determined to be the candidate's primary work location. Individual base pay depends on various factors, in addition to primary work location, such as complexity and responsibility of role, job duties/requirements, and relevant experience and skills. Base pay ranges are reviewed and typically updated each year. Offers are made within the base pay range applicable at the time. At ZeniMax certain roles are eligible for additional rewards, such as merit increases and discretionary bonuses. These awards are allocated based on individual performance and are not guaranteed. Benefits/perks listed here may vary depending on the nature of employment with ZeniMax and the country work location. U.S.-based employees have access to healthcare benefits, a 401(k) plan and company match, short-term and long-term disability coverage, basic life insurance, wellbeing benefits, paid vacation time, paid sick and mental health time, and several paid holidays, among others.
    $67k-103k yearly est. 60d+ ago
  • Senior CNO Developer

    ManTech (4.5 company rating)

    Annapolis, MD jobs

    ManTech seeks a motivated, career and customer-oriented Senior CNO Developer to join our team in Annapolis Junction, Maryland. We're looking for a Senior Capability Developer to join our elite team. In this role, you'll apply your deep technical expertise to analyze, reverse-engineer, and develop mission-critical capabilities that directly support national security objectives. You will be a key player in a fast-paced environment, tackling unique challenges at the intersection of hardware, software, and embedded systems.

    Responsibilities include but are not limited to:
    - Develop custom software tools and applications using Python, C, and Assembly, focusing on embedded and resource-constrained systems.
    - Conduct rigorous code reviews to ensure the quality, security, and performance of developed software.
    - Reverse engineer complex hardware and software systems to understand their inner workings and identify potential vulnerabilities.
    - Perform in-depth vulnerability research to discover and analyze weaknesses in a variety of targets.
    - Collaborate with a team of skilled engineers to design and implement innovative solutions to challenging technical problems.

    Minimum Qualifications:
    - Bachelor's degree and 12 years of experience; or a high school diploma with 16 years of experience; or an Associate's degree with 14 years of experience. A Master's degree may substitute for 2 years of experience, and a PhD may substitute for 4 years of experience.
    - Must have 7 years of position-relevant work experience.
    - Proficiency in programming and application development.
    - Strong scripting skills, particularly in Python, C, and Assembly.
    - Deep expertise in managing, configuring, and troubleshooting Linux.
    - Experience in embedded systems.
    - Experience in reverse engineering and vulnerability research of hardware and software.
    - Experience in code review.

    Preferred Qualifications:
    - Experience in CNO (Computer Network Operations) development.
    - Experience in virtualization.
    - Knowledge of IoT (Internet of Things) devices.
    - Experience with Linux kernel development and sockets.
    - Knowledge of integrating security tools into the CI/CD (Continuous Integration/Continuous Delivery) pipeline.
    - Networking skills.

    Clearance Requirements:
    - Must have a current/active Top Secret/SCI clearance.

    Physical Requirements:
    - The person in this position must be able to remain in a stationary position 50% of the time, and occasionally move about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
    $85k-109k yearly est. 2d ago
  • Senior Data Engineer

    EXL (4.5 company rating)

    Dallas, TX jobs

    Role: Senior Data Engineer (Data Quality Framework Team)
    Location: Hybrid (3 days onsite, 2 days work from home) - Pittsburgh / Cleveland / Dallas / Birmingham, AL / Phoenix
    Duration: Full time
    Experience: 7-10 years

    - Design and build scalable data pipelines using PySpark, SQL, and Hadoop.
    - Develop and implement data quality rules, validation checks, and monitoring dashboards.
    - Collaborate with data architects, analysts, and QE engineers to ensure end-to-end data integrity.
    - Establish coding standards, reusable components, and version control practices for data engineering workflows.
    - Optimize performance of ETL/ELT processes and troubleshoot data issues in production environments.
    - Support regulatory compliance and data governance by integrating data lineage, metadata, and audit capabilities.
    (A minimal data-quality check sketch follows this listing.)
    $70k-96k yearly est. 3d ago
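To make the data-quality responsibilities in the EXL listing above concrete, here is a minimal PySpark sketch of a validation check. The input path, column names, and rules are hypothetical placeholders and are not taken from the posting.

```python
# Minimal PySpark sketch of a data-quality validation rule.
# The input path and column names (loan_id, balance) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/raw/loans")  # hypothetical input

# Each rule counts how many rows violate an expectation.
rules = {
    "loan_id_not_null": F.sum(F.col("loan_id").isNull().cast("int")),
    "balance_non_negative": F.sum((F.col("balance") < 0).cast("int")),
}
row = df.agg(*[expr.alias(name) for name, expr in rules.items()]).first()

failures = {name: row[name] for name in rules if row[name] and row[name] > 0}
if failures:
    raise ValueError(f"Data-quality checks failed: {failures}")
```

In practice a check like this would feed the monitoring dashboards the listing mentions rather than simply raising an exception.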
  • Data Engineer

    EXL (4.5 company rating)

    Dallas, TX jobs

    Data Engineer
    Location: Hybrid (3 days onsite, 2 days work from home) - Pittsburgh / Cleveland / Dallas / Phoenix
    Experience: 8+ years
    Education: Bachelor's degree

    Key Responsibilities:
    Big Data Platform Operations
    - Design, manage, and optimize HDFS directories, tables, and partitioning strategies.
    - Implement and enforce data retention and lifecycle policies across large datasets.
    - Administer Hive and Impala environments, ensuring high availability, performance tuning, and security compliance.
    ETL Development & Data Engineering
    - Develop scalable ETL pipelines using PySpark, Hive, and Python.
    - Build reusable frameworks for data ingestion, transformation, and aggregation.
    - Optimize job performance through query tuning, resource management, and parallelization.
    DevOps & Environment Management
    - Maintain and promote code across DEV, QA, UAT, and PROD environments.
    - Develop and support CI/CD pipelines using Jenkins and uDeploy for automated deployments.
    - Perform environment upgrades, patching, and dependency management aligned with release schedules.
    Linux & Infrastructure Operations
    - Execute Linux administration tasks including performance tuning, disk management, and scripting (Bash/Python).
    - Troubleshoot cluster-level issues including node failures, job errors, and distributed system anomalies.
    Change & Incident Management
    - Drive incident resolution and change execution using ServiceNow workflows.
    - Conduct root cause analysis (RCA) for critical issues and implement preventive solutions.
    - Ensure compliance with ITIL processes for change, incident, and problem management.
    Collaboration & Technical Leadership
    - Partner with data engineers, developers, DevOps teams, and business analysts to ensure operational excellence.
    - Mentor junior engineers and contribute to technical leadership across the Big Data ecosystem.
    - Document operational procedures, troubleshooting guides, and architectural decisions for internal knowledge sharing.

    Required Qualifications:
    - Bachelor's degree in Computer Science, Information Technology, or a related field.
    - 8+ years of experience in Big Data engineering and DevOps practices.
    - Advanced proficiency in HDFS, Hive, Impala, PySpark, Python, and Linux.
    - Proven experience with CI/CD tools such as Jenkins and uDeploy.
    - Strong understanding of ETL development, orchestration, and performance optimization.
    - Experience with ServiceNow for incident/change/problem management.
    - Excellent analytical, troubleshooting, and communication skills.

    Nice to Have:
    - Exposure to cloud-based Big Data platforms (AWS EMR).
    - Familiarity with containerization (Docker, Kubernetes) and infrastructure automation tools (Ansible, Terraform).
    (A minimal partitioning and retention sketch follows this listing.)
    $70k-96k yearly est. 4d ago
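As an illustration of the partitioning and retention duties in the EXL listing above, here is a minimal PySpark-on-Hive sketch. The database, table, and column names and the 90-day window are hypothetical assumptions, not details from the posting.

```python
# Minimal PySpark sketch: write a date-partitioned Hive table and apply a
# simple retention policy. Names and the 90-day window are hypothetical.
from datetime import date, timedelta

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Append today's increment into a Hive table partitioned by event_date.
events = spark.read.parquet("/data/staging/events")  # hypothetical staging path
(events.write
       .mode("append")
       .partitionBy("event_date")
       .saveAsTable("analytics.events"))

# Retention: drop the daily partition that has just aged past 90 days.
expired = (date.today() - timedelta(days=90)).isoformat()
spark.sql(
    f"ALTER TABLE analytics.events DROP IF EXISTS PARTITION (event_date = '{expired}')"
)
```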
  • Data Engineer

    Bcforward (4.7 company rating)

    Houston, TX jobs

    *Presently we are unable to sponsor; we request that only applicants who are authorized to work without sponsorship apply.*
    Data Engineer
    Duration: Contract to hire
    Position Description: 5-7+ years of experience in data engineering.
    Must have:
    - Proficiency in Scala or Python
    - Platform experience with Databricks or the AWS framework (EMR Spark)
    - Strong SQL skills and Terraform development
    $76k-103k yearly est. 5d ago
  • Senior Data Engineer

    EXL (4.5 company rating)

    Dallas, TX jobs

    EXL is a trusted digital partner dedicated to fostering collaboration and tailoring solutions that align with our clients' unique needs, cultures, and goals. Specializing in analytics, digital interventions, and operations management, we help organizations optimize their value chains and make smarter data-driven business decisions. Through our expertise in transformation, data science, and change management, our focus is on enhancing efficiency, customer relationships, and revenue growth. At EXL, our goal is to deliver outcomes that empower businesses to achieve sustainable competitive advantage at scale.

    Role Description
    This is a full-time, on-site role for a Senior Data Engineer based in Dallas, TX. The Senior Data Engineer will be responsible for designing, implementing, and maintaining robust data engineering solutions. Day-to-day tasks include developing data models, building efficient ETL pipelines, designing and maintaining data warehousing systems, and supporting data analytics initiatives. A key aspect of this role is managing the deployment and operationalization of data solutions, specifically utilizing container orchestration technologies like Kubernetes (k8s) to ensure scalability and reliability. The role requires close collaboration with cross-functional teams to ensure data solutions align with business needs and drive effective decision-making.

    Qualifications
    - A minimum of 10 years of experience in a hands-on data engineering or related technical role.
    - Expertise in data engineering and data modeling.
    - Proficiency in building and managing Extract, Transform, Load (ETL) processes.
    - Significant experience in developing and optimizing data warehousing systems.
    - Proven experience with deployment strategies and tools.
    - Strong practical experience with Kubernetes (k8s) for deploying and managing data pipelines and services in a production environment.
    - Strong skills in data analytics and related methodologies.
    - Proficiency with programming languages such as Python, SQL, or Scala.
    - Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
    - Excellent analytical and problem-solving abilities.
    - Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field; a Master's degree is a plus.
    $70k-96k yearly est. 1d ago
  • Data Engineer

    IDR, Inc. (4.3 company rating)

    Coppell, TX jobs

    IDR is seeking a Data Engineer to join one of our top clients for an opportunity in Coppell, TX. This role involves designing, building, and maintaining enterprise-grade data architectures, with a focus on cloud-based data engineering, analytics, and machine learning applications. The company operates within the technology and data services industry, providing innovative solutions to large-scale clients.

    Position Overview for the Data Engineer:
    - Develop and maintain scalable data pipelines utilizing Databricks and Azure environments
    - Design data models and optimize ETL/ELT processes for large datasets
    - Collaborate with cross-functional teams to implement data solutions supporting analytics, BI, and ML projects
    - Ensure data quality, availability, and performance across enterprise systems
    - Automate workflows and implement CI/CD pipelines to improve data deployment processes
    (A minimal Databricks/Delta Lake sketch follows this listing.)

    Requirements for the Data Engineer:
    - 8-10 years of experience on modern data platforms with a strong background in cloud-based data engineering
    - Strong expertise in Databricks (PySpark/Scala, Delta Lake, Unity Catalog)
    - Hands-on experience with Azure (AWS/GCP also acceptable if very strong in Databricks)
    - Advanced SQL skills and strong experience with data modeling, ETL/ELT development, and data orchestration
    - Experience with CI/CD (Azure DevOps, GitHub Actions, Terraform, etc.)

    What's in it for you?
    - Competitive compensation package
    - Full benefits: medical, vision, dental, and more!
    - Opportunity to get in with an industry-leading organization

    Why IDR?
    - 25+ years of proven industry experience in 4 major markets
    - Employee Stock Ownership Program
    - Dedicated Engagement Manager who is committed to you and your success
    - Medical, dental, vision, and life insurance
    - ClearlyRated's Best of Staffing Client and Talent Award winner 12 years in a row
    $75k-103k yearly est. 1d ago
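A minimal sketch of the Databricks/Delta Lake pipeline work named in the IDR listing above, assuming the delta-spark package is available. The paths, column names, and merge key are hypothetical placeholders.

```python
# Minimal PySpark + Delta Lake sketch of an incremental upsert (Databricks-style).
# Paths and the customer_id merge key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is provided

updates = spark.read.json("/mnt/raw/customers_incremental")  # hypothetical feed

target = DeltaTable.forPath(spark, "/mnt/curated/customers")
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```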
  • Senior Data Engineer

    Odyssey Information Services (4.5 company rating)

    Houston, TX jobs

    We are seeking an experienced Data Engineer (5+ years) to join our Big Data & Advanced Analytics team. This role partners closely with Data Science and key business units to solve real-world midstream oil and gas challenges using machine learning, data engineering, and advanced analytics. The ideal candidate brings strong technical expertise and thought leadership to help mature and scale the organization's data engineering practice.

    Must-Have Skills
    - Python (Pandas, NumPy, PyTest, Scikit-Learn)
    - SQL
    - Apache Airflow
    - Kubernetes
    - CI/CD
    - Git
    - Test-Driven Development (TDD)
    - API development
    - Working knowledge of machine learning concepts

    Key Responsibilities
    - Build, test, and maintain scalable data pipeline architectures
    - Work independently on analytics and data engineering projects across multiple business functions
    - Automate manual data flows to improve reliability, speed, and reusability
    - Develop data-intensive applications and APIs
    - Design and implement algorithms that convert raw data into actionable insights
    - Deploy and operationalize mathematical and machine learning models
    - Support data analysts and data scientists by enabling data processing automation and deployment workflows
    - Implement and maintain data quality checks to ensure accuracy, completeness, and consistency
    (A minimal Airflow pipeline sketch follows this listing.)
    $76k-105k yearly est. 5d ago
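The Airflow requirement in the Odyssey listing above can be illustrated with a minimal DAG sketch, assuming Airflow 2.x. The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Apache Airflow 2.x sketch of a daily extract/transform pipeline.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source systems")


def transform():
    print("clean and reshape the extracted data")


with DAG(
    dag_id="midstream_analytics_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```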
  • Snowflake Data Engineering with AWS, Python and PySpark

    Infovision Inc. (4.4 company rating)

    Frisco, TX jobs

    Job Title: Snowflake Data Engineering with AWS, Python and PySpark
    Duration: 12 months

    Required Skills & Experience:
    - 10+ years of experience in data engineering and data integration roles.
    - Expert at working with the Snowflake ecosystem integrated with AWS services and PySpark.
    - 8+ years of core data engineering skills - hands-on experience with the Snowflake ecosystem plus AWS, core SQL, Snowflake, and Python programming.
    - 5+ years of hands-on experience building new data pipeline frameworks with AWS, Snowflake, and Python, and able to explore new ingestion frameworks.
    - Hands-on with Snowflake architecture, Virtual Warehouses, Storage and Caching, Snowpipe, Streams, Tasks, and Stages.
    - Experience with cloud platforms (AWS, Azure, or GCP) and integration with Snowflake.
    - Snowflake SQL and stored procedures (JavaScript or Python-based).
    - Proficient in Python for data ingestion, transformation, and automation.
    - Solid understanding of data warehousing concepts (ETL, ELT, data modeling, star/snowflake schema).
    - Hands-on with orchestration tools (Airflow, dbt, Azure Data Factory, or similar).
    - Proficiency in SQL and performance tuning.
    - Familiar with Git-based version control, CI/CD pipelines, and DevOps best practices.
    - Strong communication skills and ability to collaborate in agile teams.
    (A minimal Python-to-Snowflake sketch follows this listing.)
    $76k-99k yearly est. 2d ago
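As a small illustration of the Python-plus-Snowflake work in the InfoVision listing above, here is a sketch using the snowflake-connector-python package. The account, credentials, stage, and table names are hypothetical.

```python
# Minimal sketch: load a staged file into Snowflake and run a query from Python.
# Connection parameters, stage, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Ingest from an internal stage, then run a simple aggregation.
    cur.execute("COPY INTO orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)")
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date")
    for order_date, total in cur.fetchall():
        print(order_date, total)
finally:
    conn.close()
```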
  • Azure Data Engineer Sr

    Resolve Tech Solutions (4.4 company rating)

    Irving, TX jobs

    - Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
    - Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
    - Deep understanding of data engineering fundamentals, including database architecture and design, Extract, Transform and Load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
    - Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
    - Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
    $76k-100k yearly est. 4d ago
  • Palantir Data Engineer

    Infovision Inc. (4.4 company rating)

    Dallas, TX jobs

    We are seeking a skilled Palantir Data Engineer to join our data and AI team in Dallas, TX. In this role, you will design and deploy scalable data pipelines, ontologies, and machine learning models using Palantir Foundry and other modern data platforms. You will work closely with stakeholders to integrate data, optimize workflows, and enable AI-powered decisions across critical functions such as supply chain, defense, and operations.

    Responsibilities
    - Build and maintain data pipelines and workflows in Palantir Foundry, integrating structured and unstructured data.
    - Develop and manage ontology models and object logic within Foundry to align with business use cases.
    - Design, train, and deploy ML models for classification, optimization, and forecasting use cases.
    - Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries.
    - Support model deployment, monitoring, and MLOps using tools like MLflow and Databricks.
    - Create dashboards and data applications using Slate or Streamlit to enable operational decision-making.
    - Implement generative AI use cases using large language models (GPT-4, Claude, Cohere) and Retrieval-Augmented Generation (RAG).
    - Collaborate with data scientists, engineers, and product owners to deliver end-to-end data solutions.
    - Work directly with clients to define data requirements and guide platform enablement.
    - Support agile project delivery and provide ongoing platform optimization post-deployment.

    Qualifications
    - 5+ years of experience in data engineering, machine learning, or AI deployment.
    - Proficiency with Palantir Foundry (pipelines, code workbooks, ontology modeling).
    - Strong programming skills in Python, Scala, or Java; experience with Spark and cloud platforms.
    - Hands-on experience with Unix/Linux environments and big data tools.
    - Familiarity with MLOps, model retraining, and monitoring best practices.
    - Excellent communication and client engagement skills.
    - Experience working in logistics, defense, energy, or public sector industries is a plus.

    Education
    Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. A Master's degree is preferred.
    $76k-99k yearly est. 4d ago
  • Data Engineer (Python, PySpark, Databricks)

    Anblicks (4.5 company rating)

    Dallas, TX jobs

    Job Title: Data Engineer (Python, PySpark, Databricks)
    We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
    Data Engineer - SQL, Python, and PySpark Expert (Onsite - Dallas, TX)

    Key Responsibilities
    - Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
    - Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
    - Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
    - Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
    - Ensure data quality, consistency, security, and lineage across all stages of data processing
    - Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
    - Document data flows, logic, and transformation rules
    - Troubleshoot performance and quality issues in batch and real-time pipelines
    - Support compliance-related reporting (e.g., HMDA, CFPB)

    Required Qualifications
    - 6+ years of experience in data engineering or data development
    - Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
    - Strong hands-on skills in Python for scripting, data wrangling, and automation
    - Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
    - Experience working with mortgage banking data sets; domain knowledge is highly preferred
    - Strong understanding of data modeling (dimensional, normalized, star schema)
    - Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
    - Familiarity with ETL tools and orchestration frameworks (e.g., Airflow, ADF, dbt)
    $75k-102k yearly est. 2d ago
  • Python Software Engineer w/ .Net & C# exp - HYBRID - Westlake, TX

    Access Global Group (4.3 company rating)

    Dallas, TX jobs

    Access Global Group is seeking a skilled Python Software Engineer to join our delivery team.
    HYBRID - Westlake, TX - every other week in office

    EMPLOYMENT TYPE: Applicants must be authorized to work for ANY employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.
    NOTE: Not open to third-party/C2C agency candidates

    INTERESTED:
    - Navigate to **********************
    - Review the full job description
    - Submit your application
    Our recruitment team will review viable applicants and reach out directly to discuss next steps with those whose experience aligns with the role.

    ABOUT AGG
    Access Global Group is a global Technology Services and Technology Consulting company based in the US, Canada, and India. AGG offers a comprehensive array of business, technology, and cloud services, as well as staff augmentation. Access Global Group is committed to its communities and to providing employees with a solid work-life balance and opportunities to grow professionally. The person in this role needs to embody the Access Global Group values of quality, collaboration, empowerment, compassion, transparency, being genuine, agile, and dynamic. We want someone who believes in our mission.

    ROLE DESCRIPTION
    Access Global Group is adding a Python Software Engineer to work on the Performance Automation Suite, including building out and completing the existing framework and developing additional integration automation tests. The role includes designing and developing Python scripts, managing dependencies, integrating with CI/CD tools like Jenkins and the GitHub API, and building .NET C# console and UI applications to support internal automation and performance initiatives.

    RESPONSIBILITIES
    - Design and develop Python scripts for performance automation across Windows and macOS platforms
    - Build and maintain desktop automation workflows using PyAutoGUI, PyWinAuto, and ATmacOS, including refactoring existing automation
    - Develop and support .NET C# console applications focused on performance and scalability
    - Integrate automation workflows with CI/CD and test management tools such as Jenkins, the GitHub API, and the Xray API
    - Collaborate with QA, DevOps, and development teams to ensure seamless automation integration and performance validation
    - Other duties as requested by leadership
    (A minimal desktop-automation sketch follows this listing.)

    REQUIREMENTS/QUALIFICATIONS
    - Strong proficiency in Python scripting, including pip packaging and dependency management
    - Experience with cross-platform automation and desktop automation tools (PyAutoGUI, PyWinAuto, ATmacOS)
    - Working knowledge of .NET C#, especially for console applications; UI development experience is a plus
    - Familiarity with CI/CD tools and APIs (Jenkins, GitHub, Xray) and a solid understanding of OOP design principles
    - Excellent problem-solving skills and ability to work independently in a fast-paced environment

    OTHER
    Must have no other full-time commitments; ready to engage in exciting technical consulting projects with our diverse portfolio of clients.

    BENEFITS
    For W2 employees, AGG offers the opportunity for growth and advancement, as well as a competitive base salary, medical benefits, and 401k.

    ABOUT ACCESS GLOBAL GROUP (***************)
    Access Global Group is a team of experts in Salesforce Consulting, Support, and Managed Services. Living up to our name, we are a truly global company with offices throughout the United States, Canada, and India and successful projects throughout 37 US states and 15 countries.
    Access Global Group is a fully remote company, which means we aren't limited to hiring within the confines of a single district or region. We can add talented individuals to our team based on experience, certifications, and skills from across the globe. This gives our clients access to the most exceptional team overall, not just in their area. Since solutions come in all shapes and sizes, we believe the best team should be just as unique. The Access Global Group team is comprised of individuals with a wide variety of languages, backgrounds, stories, experiences, and expertise. This makes it hard to find a problem we haven't encountered before and certifies that there is no limit to what can be achieved with Access Global Group.

    EEO/ADA POLICY
    AGG is an equal opportunity, affirmative action employer providing equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, national origin, protected veteran status, disability status, or any other legally protected basis, in accordance with applicable law.
    ADA Specifications: Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions of this position. Requires the ability to speak, hear, see, and use a computer and other office-related equipment.
    $67k-90k yearly est. 3d ago
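A minimal PyAutoGUI sketch of the kind of desktop performance-automation step described in the Access Global Group listing above. The coordinates, file name, and timings are hypothetical, and PyWinAuto/ATmacOS specifics are omitted.

```python
# Minimal PyAutoGUI sketch of a scripted desktop interaction timed for a performance check.
# Coordinates, text, and timings are hypothetical placeholders.
import time

import pyautogui

pyautogui.FAILSAFE = True   # move the mouse to a screen corner to abort
pyautogui.PAUSE = 0.5       # brief pause between PyAutoGUI actions

start = time.perf_counter()
pyautogui.click(200, 140)                            # hypothetical "Run" button
time.sleep(2)                                        # wait for the save dialog
pyautogui.write("performance_baseline.json", interval=0.02)
pyautogui.press("enter")
pyautogui.screenshot("run_result.png")               # capture the resulting screen
print(f"scenario finished in {time.perf_counter() - start:.1f}s")
```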
  • Senior Python Data Engineer

    Synechron (4.4 company rating)

    Irving, TX jobs

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our challenge
    We are seeking a Senior Python Data Engineer with deep expertise in building scalable, production-grade data pipelines and models, particularly within the financial domain. The ideal candidate will have advanced proficiency in modern Python frameworks like FastAPI and Pydantic, strong data manipulation skills, and extensive experience with relational and NoSQL databases. Knowledge of graph databases, LLMs, and MLOps practices is highly desirable. Join us to innovate at the intersection of data engineering, AI, and financial risk management, delivering impactful, high-performance solutions.

    Additional Information
    The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Irving, TX is $120k - $130k/year plus benefits (see below).

    The Role
    Responsibilities:
    - Design, develop, and deploy robust, scalable data pipelines to process large volumes of structured and unstructured financial data.
    - Implement AI-driven solutions, leveraging LLMs and Generative AI to enhance data quality, enrichment, and analysis.
    - Engineer and productionize predictive and prescriptive models in collaboration with quantitative and business teams to deliver measurable value.
    - Analyze complex financial datasets, with a focus on credit risk, to uncover patterns, insights, and innovative solutions.
    - Serve as a technical partner to stakeholders, translating business requirements into high-performance, resilient data systems.
    - Stay abreast of the latest advancements in data engineering, AI, and ML to foster continuous innovation and improvement within the team.

    Requirements:
    - Expert-level Python: deep, hands-on proficiency with modern Python (3.11+).
    - Modern frameworks: proven experience building high-performance, production-ready services and data models using the latest Python frameworks, including FastAPI and Pydantic. (A minimal FastAPI/Pydantic sketch follows this listing.)
    - Data tooling: strong command of core data manipulation and analysis libraries (e.g., Pandas, NumPy, Polars).
    - Database proficiency: advanced SQL skills and extensive experience working with large-scale relational databases (e.g., Sybase IQ, PostgreSQL, Oracle).
    - Educational foundation: Bachelor's degree in Computer Science, Engineering, or a related quantitative field (or equivalent practical experience).
    - Problem-solving mindset: a proven ability to dissect complex, often ambiguous problems and engineer elegant, effective solutions.

    Preferred, but not required:
    - Graph technology: practical experience with graph databases, specifically Neo4j Enterprise, and graph data modeling concepts.
    - Diverse database experience: proficiency with various database systems, including relational databases like PostgreSQL and NoSQL databases like MongoDB.
    - GenAI & LLM experience: hands-on experience with modern AI frameworks like LangChain, LlamaIndex, or Hugging Face Transformers.
    - Big Data expertise: familiarity with distributed computing frameworks like Apache Spark (PySpark) or Dask.
    - Financial domain knowledge: prior experience in the financial services industry, especially within risk management, is a significant plus.
    - MLOps: understanding of MLOps principles and tools for model versioning, deployment, and monitoring (e.g., MLflow, Kubeflow).

    We offer:
    - A highly competitive compensation and benefits package.
    - A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    - 10 days of paid annual leave (plus sick leave and national holidays).
    - Maternity & paternity leave plans.
    - A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    - Retirement savings plans.
    - A higher education certification policy.
    - Commuter benefits (varies by region).
    - Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    - On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses.
    - Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    - Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    - A flat and approachable organization.
    - A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $120k-130k yearly 3d ago
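A minimal FastAPI/Pydantic sketch of the kind of data service the Synechron listing above describes. The endpoint, model fields, and in-memory store are hypothetical stand-ins for the databases named in the posting.

```python
# Minimal FastAPI + Pydantic sketch of a small data service.
# The Exposure model, route, and in-memory store are hypothetical placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="credit-risk-data-service")


class Exposure(BaseModel):
    counterparty_id: str
    notional: float
    rating: str


# Stand-in for a database lookup (e.g., PostgreSQL or MongoDB in the listing).
_EXPOSURES = {
    "CP-001": Exposure(counterparty_id="CP-001", notional=2_500_000.0, rating="BBB"),
}


@app.get("/exposures/{counterparty_id}", response_model=Exposure)
def get_exposure(counterparty_id: str) -> Exposure:
    exposure = _EXPOSURES.get(counterparty_id)
    if exposure is None:
        raise HTTPException(status_code=404, detail="counterparty not found")
    return exposure
```

Run locally with, for example, `uvicorn service:app --reload`, assuming the file is saved as service.py.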
  • GCP Data Engineer

    Infosys (4.4 company rating)

    Richardson, TX jobs

    Infosys is seeking a Google Cloud Platform (GCP) data engineer with experience in GitHub and Python. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will be responsible for interfacing with key stakeholders and applying your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.

    Required Qualifications:
    - Candidate must be located within commuting distance of Richardson, TX or be willing to relocate to the area. This position may require travel in the US.
    - Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
    - Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time.
    - At least 4 years of Information Technology experience.
    - Experience working with GCP data engineering technologies such as Dataflow / Airflow, Pub/Sub / Kafka, Dataproc / Hadoop, and BigQuery.
    - ETL development experience with a strong SQL background, plus Python/R, Scala, Java, Hive, Spark, and Kafka.
    - Strong knowledge of Python program development to build reusable frameworks and enhance existing frameworks.
    - Application build experience with core GCP services like Dataproc, GKE, and Composer; deep understanding of GCP IAM and GitHub. Must have done IAM setup.
    - Knowledge of CI/CD pipelines using Terraform in Git.

    Preferred Qualifications:
    - Good knowledge of Google BigQuery, using advanced SQL programming techniques to build BigQuery datasets in the ingestion and transformation layers. (A minimal BigQuery load-and-query sketch follows this listing.)
    - Experience in relational modeling, dimensional modeling, and modeling of unstructured data.
    - Knowledge of Airflow DAG creation, execution, and monitoring.
    - Good understanding of Agile software development frameworks.
    - Ability to work in teams in a diverse, multi-stakeholder environment comprising Business and Technology teams.
    - Experience and desire to work in a global delivery environment.
    $72k-91k yearly est. 2d ago
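To illustrate the BigQuery ingestion-and-transformation work mentioned in the Infosys listing above, here is a minimal google-cloud-bigquery sketch. The project, bucket, dataset, and table names are hypothetical, and credentials are assumed to come from the environment (e.g., GOOGLE_APPLICATION_CREDENTIALS).

```python
# Minimal google-cloud-bigquery sketch: load a file from GCS and run a query.
# Project, bucket, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

# Ingestion layer: load newline-delimited JSON from GCS into a table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/events.json",
    "my-gcp-project.analytics.events",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
    ),
)
load_job.result()  # block until the load finishes

# Transformation layer: a simple aggregation over the loaded table.
rows = client.query(
    "SELECT event_type, COUNT(*) AS n "
    "FROM `my-gcp-project.analytics.events` GROUP BY event_type"
).result()
for row in rows:
    print(row.event_type, row.n)
```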
  • Python Data Engineer - THADC5693417

    Compunnel Inc. (4.4 company rating)

    Houston, TX jobs

    Must Haves:
    - Strong proficiency in Python; 5+ years' experience
    - Expertise in FastAPI and microservices architecture and coding
    - Linking Python-based apps with SQL and NoSQL databases
    - Deployments on Docker and Kubernetes, and monitoring tools
    - Experience with automated testing and test-driven development
    - Git source control, GitHub Actions, CI/CD, VS Code, and Copilot
    - Expertise in both on-prem SQL databases (Oracle, SQL Server, Postgres, DB2) and NoSQL databases
    - Working knowledge of data warehousing and ETL
    - Able to explain the business functionality of the projects/applications they have worked on
    - Ability to multitask and simultaneously work on multiple projects
    - NO CLOUD - they are on-prem

    Day to Day:
    Insight Global is looking for a Python Data Engineer for one of our largest oil and gas clients in Downtown Houston, TX. This person will be responsible for building Python-based relationships between back-end SQL and NoSQL databases, architecting and coding FastAPI and microservices, and performing testing on back-office applications. The ideal candidate will have experience developing applications utilizing Python and microservices and implementing complex business functionality utilizing Python.
    $78k-101k yearly est. 3d ago
  • Junior Data Engineer

    Optomi (4.5 company rating)

    Dallas, TX jobs

    Junior Data Engineer - Azure
    Hybrid to Dallas, TX
    Initial 1-year contract - potential to extend or convert to full time
    Rate: $40-$50/hour
    W2 only - no C2C or 1099

    Optomi, in partnership with a transportation company, is seeking a Junior Data Engineer with a minimum of 2 years of hands-on experience in data engineering, cloud platforms, and data integration. This role is ideal for someone who is enthusiastic about building reliable data pipelines, working with modern data tools, and contributing to scalable data solutions in a collaborative environment.

    Responsibilities:
    - Design, develop, and maintain robust and scalable data pipelines.
    - Integrate data from various sources using tools like Azure Data Factory, SQL, and other ETL frameworks.
    - Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
    - Monitor and optimize data workflows for performance and reliability.
    - Ensure data quality, integrity, and security across all data processes.
    - Participate in code reviews and contribute to best practices in data engineering.

    Experience and skills needed:
    - Bachelor's degree in Computer Science, Information Technology, or a related field.
    - 2+ years of experience in data engineering.
    - Proficiency in SQL and experience with relational databases (e.g., SQL Server, PostgreSQL).
    - Hands-on experience with Microsoft Azure, especially Azure Data Factory, Azure Storage, and Azure SQL.
    - Familiarity with data integration techniques and ETL/ELT processes.
    - Strong problem-solving skills and attention to detail.
    - Knowledge of data modeling and data warehousing concepts.
    - Familiarity with version control systems (e.g., Git).
    - Exposure to data visualization tools like Power BI or Tableau.
    - Understanding of CI/CD pipelines and DevOps practices in data engineering.
    $40-50 hourly 2d ago
  • Software Engineer

    Lancesoft, Inc. (4.5 company rating)

    Dallas, TX jobs

    Title: Elastic Engineer / Elasticsearch Specialist with Java (Spring Boot, Kafka)
    Duration: 6+ months (extension possible)
    Experience: 10+ years
    Must-have skills: Pega, Pega Decision Management

    Responsibilities:
    - Design and implement Next-Best-Action (NBA) strategies using Pega CDH.
    - Configure and optimize decision strategies, engagement policies, and predictive models.
    - Develop and maintain Pega rules, flows, and data models aligned with business requirements.
    - Integrate Pega CDH with external systems for data ingestion and outbound communication.
    - Collaborate with business analysts, architects, and QA teams to ensure high-quality deliverables.
    - Perform unit testing, debugging, and performance tuning of Pega applications.
    - Stay updated with Pega platform upgrades and best practices.

    Qualifications:
    - 7 years of experience in Pega development, with at least 4 years in Pega CDH.
    - Strong knowledge of the Pega Decisioning Framework, NBA Designer, and Customer Profile Designer.
    - Experience with Pega 8.x or higher versions.
    - Familiarity with data flows, adaptive models, and predictive analytics.
    - Experience in SQL queries.
    - Understanding of integration protocols (SOAP, REST, MQ, etc.).
    - Pega certifications such as CPDC (Certified Pega Decisioning Consultant) or CSA/CSSA preferred.
    - Excellent problem-solving and communication skills.

    Nice to Have:
    - API development and debugging.
    - Knowledge of Agile methodologies and DevOps practices.
    - Exposure to cloud environments (AWS, Azure, GCP).
    $69k-89k yearly est. 3d ago
  • Java Software Engineer

    Resolve Tech Solutions (4.4 company rating)

    Irving, TX jobs

    Role: Sr. Java Backend Developer (Java, Spring Boot, MongoDB, AWS)
    Candidates should be willing to attend a live coding session onsite at Irving, TX.

    Responsibilities
    - Design APIs; develop shippable code, documentation, and unit tests for new features for digital products.
    - Work with fellow API developers, team leads, and architects to deliver features through the creation of reusable RESTful APIs.
    - Front-end design, development, and integration.
    - Collaborate with Quality, Product, and Cloud Engineering teams to keep digital assets fully functional, secure, and up to date with business needs.
    - Perform pair programming, effectively communicate ideas with the team, and assist in systems integration, performance testing, and product releases.
    - Implement policies, roles, data access controls, and monitoring events; resolve system and data issues for continuous functioning of APIs.
    - Mentor junior developers through work product review and help with design, development tools, and development best practices.

    Qualifications
    - Must have 6-10 years of hands-on programming experience as a Senior Engineer or a Technical Lead.
    - Must have 3 years of RESTful API / server-side development experience using a microservices architecture with Spring Boot.
    - Must have 3 years of experience in cloud services (preferably AWS), developing microservices, CI/CD solutions, message queue systems, and background task management.
    - Must have 3 years of experience in developing NoSQL and SQL databases and designing data models; proficient in querying data for quality, analysis, analytics, and ad hoc reporting.
    - Knowledge of API security frameworks, token management, and user access control, including OAuth, JWT, OpenAPI, etc.
    - Experience working with API gateways, CDNs, API performance testing, CI/CD pipelines, and monitoring tools.
    - Ability to work in an Agile / Scrum environment.
    - Strong writing and communication skills.
    $65k-85k yearly est. 3d ago
  • Senior Gameplay Programmer

    ZeniMax Media, Inc. (4.5 company rating)

    Game engineer job at ZeniMax Media

    Come join Bethesda Game Studios, the award-winning development team behind Starfield, The Elder Scrolls, and Fallout. Bethesda Game Studios strives to offer its employees a well-balanced home and work life by providing competitive salaries, a generous benefits program, and offices located in some of North America's best cities. With a goal of creating a culture as fun and diverse as our games and our players, we welcome applicants with unique skillsets, experience levels, and backgrounds. If you are passionate about making a meaningful contribution to some of the most significant games in the industry, we'd love to hear from you! We will consider candidates for any of our four Bethesda Game Studios office locations: Rockville, MD; Montreal, Quebec; Austin, TX; and Dallas, TX.

    Responsibilities
    Your Daily Life at Bethesda Game Studios. As Senior Gameplay Programmer, you will...
    - Collaborate on the implementation of new gameplay features: player and character behaviors, combat and powers mechanics, open-world gameplay, etc.
    - Work closely with designers, artists, and other programmers to iterate on gameplay features and ensure a great player experience
    - Develop and own full aspects of the game experience
    - Collaborate with other members of the programming team to build sustainable and maintainable technologies and optimized code on all platforms
    - Guide and mentor more junior members of the team

    Qualifications
    What Makes You S.P.E.C.I.A.L.
    - You have 5+ years of experience in game development, having shipped at least 1 AAA title
    - You are proficient with C++ and object-oriented programming
    - You have a solid understanding of proper code architecture, performance and memory cost, and practical experience with code optimization
    - You have development experience on game consoles (Xbox, PlayStation)
    - You have the ability to contribute innovative and original ideas toward all aspects of game production and development
    - You possess strong communication and organizational skills
    - You bring a positive attitude and a collaborative spirit
    - You have a passion for making GREAT games
    - You have experience playing Bethesda Game Studios games

    Salary Range
    Senior Gameplay Programmer - The typical base pay range for this position at the start of employment is expected to be between $105,000 and $225,000 per year. ZeniMax has different base pay ranges for different work locations within the United States, which allows us to pay employees competitively and consistently in different geographic markets. The range above reflects the potential base pay across the U.S. for this role; the applicable base pay range will depend on what ultimately is determined to be the candidate's primary work location. Individual base pay depends on various factors, in addition to primary work location, such as complexity and responsibility of role, job duties/requirements, and relevant experience and skills. Base pay ranges are reviewed and typically updated each year. Offers are made within the base pay range applicable at the time. At ZeniMax certain roles are eligible for additional rewards, such as merit increases and discretionary bonuses. These awards are allocated based on individual performance and are not guaranteed. Benefits/perks listed here may vary depending on the nature of employment with ZeniMax and the country work location. U.S.-based employees have access to healthcare benefits, a 401(k) plan and company match, short-term and long-term disability coverage, basic life insurance, wellbeing benefits, paid vacation time, paid sick and mental health time, and several paid holidays, among others. This position is in a union and represented by the Communications Workers of America.
    $67k-103k yearly est. 48d ago
