About the Role
We are looking for a Data Analytics Engineer who sits at the intersection of data engineering and analytics. In this role, you will transform raw, messy data from vehicles, APIs, and operational systems into clean, reliable datasets that are trusted and widely used, from engineering teams to executive leadership.
You will own data pipelines end to end and build dashboards that surface insights, track performance, and help teams quickly identify issues.
What You'll Do
Build and maintain ETL pipelines that ingest data from diverse internal systems into a centralized analytics warehouse
Work with unique and high-volume datasets, including vehicle telemetry, sensor-derived signals, logistics data, and system test results
Write efficient, well-structured SQL to model and prepare data for analysis and reporting
Design, build, and maintain dashboards (e.g., Grafana or similar) used to monitor system performance and operational health
Partner closely with engineering, operations, and leadership teams to understand data needs and deliver actionable datasets
Explore internal AI- and LLM-based tools to automate analysis and uncover new insights
What You'll Need
Strong hands-on experience with Python and data libraries such as pandas, Polars, or similar
Advanced SQL skills, including complex joins, window functions, and query optimization
Proven experience building and operating ETL pipelines using modern data tooling
Experience with BI and visualization tools (e.g., Grafana, Tableau, Looker)
Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect
High-level understanding of LLMs and interest in applying them to data and analytics workflows
Strong ownership mindset and commitment to data quality and reliability
Nice to Have
Experience with ClickHouse or other analytical databases (e.g., Snowflake, BigQuery, Redshift)
Background working with vehicle, sensor, or logistics data
Prior experience in autonomous systems, robotics, or other data-intensive hardware-driven domains
$78k-106k yearly est. 1d ago
SAP Data Architect
Excelon Solutions 4.5
Data engineer job in Austin, TX
Title: SAP Data Architect
Mode: Full-time
Expectations / Deliverables for the Role
Builds the SAP data foundation by defining how SAP systems store, share, and manage trusted enterprise data.
Produces reference data architectures by leveraging expert input from application, analytics, integration, platform, and security teams. These architectures form the basis for new solutions and enterprise data initiatives.
Enables analytics and AI use cases by ensuring data is consistent, governed, and discoverable.
Leverages SAP Business Data Cloud, Datasphere, MDG and related capabilities to unify data and eliminate duplicate data copies.
Defines and maintains common data model catalogs to create a shared understanding of core business data.
Evolves data governance, ownership, metadata, and lineage standards across the enterprise.
Protects core transactional systems by preventing excessive replication and extraction loads.
Technical Proficiency
Strong knowledge of SAP master and transactional data domains.
Hands-on experience with SAP MDG, Business Data Cloud, BW, Datasphere, or similar platforms.
Expertise in data modeling, metadata management, data quality, and data governance practices.
Understanding of data architectures that support analytics, AI, and regulatory requirements.
Experience integrating SAP data with non-SAP analytics and reporting platforms.
Soft Skills
Ability to align data and engineering teams around a shared data vision and drive consensus on data standards and decisions.
Strong facilitation skills to resolve data ownership and definition conflicts.
Clear communicator who can explain architecture choices, trade-offs, and cost impacts to stakeholders.
Pragmatic mindset focused on value, reuse, and simplification.
Comfortable challenging designs constructively in ARB reviews.
$92k-124k yearly est. 3d ago
Staff Data Engineer
Visa 4.5
Data engineer job in Austin, TX
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid.
Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.
Job Description
Visa Technology & Operations LLC, a Visa Inc. company, needs a Staff Data Engineer (multiple openings) in Austin, TX to:
Design, enhance, and build next generation fraud detection solutions in an agile development environment.
Formulate business problems as technical data problems while ensuring key business drivers are captured in collaboration with product stakeholders.
Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards. Responsibilities span all phases of solution development.
Collaborate with project team members (Product Managers, Architects, Analysts, Developers, Project Managers, etc.) to ensure development and implementation of new data driven business solutions.
Deliver all code commitments and ensure a complete end-to-end solution that meets and exceeds business expectations.
Assist in scoping and designing analytic data assets, implementing modelled attributes, and contributing to brainstorming sessions.
Build and maintain a robust data engineering process to develop and implement self-serve data and tools for Visa's data scientists.
Perform other tasks on data governance, system infrastructure, analytics tool evaluation, and other cross-team functions, as needed.
Execute data engineering projects ranging from small to large, either individually or as part of a project team.
Ensure project delivery within timelines and budget requirements.
Effectively communicate status, issues, and risks in a precise and timely manner.
Position reports to the Austin, Texas office and may allow for partial telecommuting.
Qualifications
Basic Qualifications:
Bachelor's degree in Computer Science, Engineering, Data Analytics or related field, followed by 5 years of progressive, post-baccalaureate experience in the job offered or in a data engineer-related occupation.
Alternatively, a Master's degree in Computer Science, Engineering, Data Analytics or related field and 2 years of experience in the job offered or in a data engineer-related occupation.
Experience must include:
Creating data driven business solutions and solving data problems using the following technologies: Hadoop, Spark, NoSQL and SQL.
Building ETL/ELT data pipelines, data quality checks and data anomaly detection systems.
Building large scale data processing systems of high availability, low latency, & strong data consistency.
Programming using SQL or Python.
Agile development incorporating continuous integration and continuous delivery.
Additional Information
Worksite: Austin, TX
This is a hybrid position. Hybrid employees can alternate their time between remote and office work. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
Travel Requirements: This position does not require travel.
Mental/Physical Requirements: This position will be performed in an office setting. It will require the incumbent to sit and stand at a desk, communicate in person and by telephone, and frequently operate standard office equipment, such as telephones and computers.
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
U.S. APPLICANTS ONLY: The estimated salary range for a new hire into this position is $163,550.00 USD to $210,300.00 USD per year, which may include potential sales incentive payments (if applicable). Salary may vary depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position may be eligible for bonus and equity. Visa has a comprehensive benefits package for which this position may be eligible that includes Medical, Dental, Vision, 401 (k), FSA/HSA, Life Insurance, Paid Time Off, and Wellness Program.
$163.6k-210.3k yearly 4d ago
CSV Engineer
Advantage Technical
Data engineer job in Austin, TX
Computer System Validation Engineer
Contract: 6 months
Hours: 8-5 (flexible as needed)
Rate: $50-$55/hr
Sponsorship: Not available
No C2C or agency referrals
Maintain and improve quality engineering programs, ensuring compliance with regulatory standards and company policies. Lead and support computer system validation (CSV) activities across software and IT systems.
Key Responsibilities:
Define and implement validation strategy for the V&V program.
Complete, review, and approve all CSV deliverables.
Conduct assessments including Validation Applicability, 21 CFR Part 11, and Software Functionality Risk Assessments.
Review test cases, executed protocols, and reports to ensure proper coverage and compliance with validation standards.
Provide guidance on incident/defect handling and risk mitigation.
Review and update SOPs, templates, and validation documentation.
Support internal and external audits.
Develop and maintain quality standards, inspection/testing procedures, and corrective actions.
Compile and report quality data; liaise with Product Engineers, Quality Program Managers, and regulatory bodies as needed.
Qualifications:
Experience in computer system validation, software V&V, and regulatory compliance.
Strong knowledge of 21 CFR Part 11, SOPs, and CSV documentation.
Ability to conduct risk assessments and provide independent validation reviews.
Excellent communication and training skills.
$50-55 hourly 1d ago
Data Enablement Consultant - Transformation Programs
Eclerx Services
Data engineer job in Austin, TX
Type: Fixed Term (6 months), eligible for employee benefits
Department: Technology
We are looking for a strategic and collaborative Data Enablement Consultant to support enterprise-wide transformation programs by identifying, defining, and enabling access to order-related data. This role will work closely with cross-functional teams, including business units, technology teams, and data platform owners, to ensure the right data is available, accessible, and trusted across transformation initiatives. You will play a critical role in bridging the gap between data consumers and data producers, accelerating the delivery of transformation outcomes through well-governed and fit-for-purpose data.
Responsibilities
* Partner with transformation program teams to understand business objectives and identify order-related data needs (e.g., customer, product, finance, supplier, inventory, employee, etc.).
* Work across business units to gather requirements, map data dependencies, and prioritize data enablement initiatives.
* Collaborate with data engineering and data governance teams to ensure relevant data sources are ingested, modeled, and made available in enterprise data platforms.
* Lead efforts to catalog, document, and communicate newly enabled datasets, ensuring alignment with data governance and metadata standards.
* Develop and maintain a backlog of data enablement workstreams linked to key transformation milestones.
* Facilitate workshops and discovery sessions with cross-functional stakeholders to uncover hidden or siloed data critical to transformation programs.
* Serve as the liaison between business users and technical teams to ensure data needs are well-understood, translated into technical requirements, and delivered appropriately.
* Monitor the usage and adoption of newly enabled data assets and address data quality or accessibility issues as needed.
* Support data literacy by promoting understanding of newly enabled data and how it can be used effectively in the context of business transformation.
Eligibility Requirements
* Bachelor's degree in Information Systems, Data Science, Business, or related field.
* 5+ years of experience in data management, data enablement, or analytics roles with cross-functional collaboration.
* Proven experience working on or supporting large-scale transformation or change programs.
* Strong understanding of data domains outside of order-related data (e.g., master data, financial data, customer data, supplier data, etc.).
* Experience working with modern data platforms and tools (e.g., Snowflake, BigQuery, Power BI, Tableau, Collibra, Alation).
* Strong stakeholder management and facilitation skills across business and technical teams.
* Knowledge of data governance, metadata management, and data cataloging practices.
* Preferred Qualifications:
* Experience in a matrixed or federated data organization.
* Familiarity with enterprise transformation methodologies or frameworks.
* Understanding of data architecture concepts and enterprise data modeling.
* Experience with Agile delivery environments and tools (e.g., Jira, Confluence).
In the US, the target base salary for this role is $150,000-$200,000. Compensation is based on a range of factors that include relevant experience, knowledge, skills, other job-related qualifications, and geography. We expect the majority of candidates who are offered roles at our company to fall within this range based on these factors.
What We Offer
* Competitive salary and performance bonuses
* Flexible working hours
* Career growth opportunities and ongoing training
* Inclusive, supportive company culture
How to Apply
* Click "Apply Now" to submit your resume through our career site
* Be sure to include any relevant experience that aligns with the role.
* Qualified candidates will be contacted by a member of our recruitment team for next steps
About eClerx
eClerx is a leading provider of productized services, bringing together people, technology and domain expertise to amplify business results.
The firm provides business process management, automation, and analytics services to a number of Fortune 2000 enterprises, including some of the world's leading financial services, communications, retail, fashion, media & entertainment, manufacturing, travel & leisure, and technology companies. Incorporated in 2000, eClerx is traded on both the Bombay and National Stock Exchanges of India. The firm employs more than 19,000 people across Australia, Canada, France, Germany, Switzerland, Egypt, India, Italy, the Netherlands, Peru, the Philippines, Singapore, Thailand, the UK, and the USA.
For more information, visit **************
eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. We are also committed to protecting and safeguarding your personal data. Please find our policy here
$150k-200k yearly Auto-Apply 39d ago
Data Scientist, GTM Analytics
Airtable 4.2
Data engineer job in Austin, TX
Airtable is the no-code app platform that empowers people closest to the work to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done.
Our data team's mission is to fuel Airtable's growth and operations. We are a strategic enabler, building high-quality, customer-centric data products and solutions. We are looking for a Data Scientist to work directly with Airtable's business stakeholders. Your data products will be instrumental in accelerating the efficiency of Customer Engagement (CE) organizations, including sales, CSG, and revenue operations teams. This role offers the opportunity to significantly impact Airtable's strategy and go-to-market execution, providing you with a platform to deploy your data skills in a way that directly contributes to our company's growth and success.
What you'll do
Champion AI-Driven Data Products with Scalability: Design and implement ML models and AI solutions to equip the CE team with actionable insights and recommendations. Build scalable data pipelines and automated workflows with MLOps best practices.
Support Key Business Processes: Independently provide strategic insights, repeatable frameworks, and thought partnership to support key CE business processes such as territory carving, annual planning, pricing optimization, and performance attribution.
Strategic Analysis: Drive in-depth analysis to ensure accuracy and relevance. Influence business stakeholders with compelling storytelling around the data. Tackle ambiguous problems to uncover business value with minimal oversight.
Develop Executive Dashboards: Design, build, and maintain high-quality dashboards and BI tools. Partner with the Revenue Operations team to efficiently equip the wide range of CE roles with data products.
Strong Communication Skills: Effectively communicate the “so-what” of an analysis, illustrating how insights can be leveraged to drive business impact across the organization.
Who you are
Education: Bachelor's degree in a quantitative discipline (Math, Statistics, Operations Research, Economics, Engineering, or CS); MS/MBA preferred.
Industry Experience: 4+ years of working experience as a data scientist / analytics engineer in high-growth B2B SaaS, preferably supporting sales, CSG or other go-to-market stakeholders.
Demonstrated business acumen with a deep understanding of Enterprise Sales strategies (sales pipeline, forecast models, sales capacity, sales segmentation, quota planning), CSG strategies (customer churn risk model, performance attribution) and Enterprise financial metrics (ACV, ARR, NDR)
Familiar with CRM platforms (e.g., Salesforce)
Technical Proficiency:
6+ years of experience working with SQL in modern data platforms, such as Databricks, Snowflake, Redshift, BigQuery
6+ years of experience working with Python or R for analytics or data science projects
6+ years of experience building business facing dashboards and data models using modern BI tools like Looker, Tableau, etc.
Proficient-level experience developing automated solutions to collect, transform, and clean data from various sources, by using tools such as dbt, Fivetran
Proficient knowledge of data science models, such as regression, classification, clustering, time series analysis, and experiment design
Hands-on experience with batch LLM pipeline is preferred
Excellent communication skills to present findings to both technical and non-technical audiences.
Passion for thriving in a dynamic environment, which means being flexible and willing to jump in and do whatever it takes to be successful.
Airtable is an equal opportunity employer. We embrace diversity and strive to create a workplace where everyone has an equal opportunity to thrive. We welcome people of different backgrounds, experiences, abilities, and perspectives. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or any characteristic protected by applicable federal and state laws, regulations and ordinances. Learn more about your EEO rights as an applicant.
VEVRAA-Federal Contractor
If you have a medical condition, disability, or religious belief/practice which inhibits your ability to participate in any part of the application or interview process, please complete our Accommodations Request Form and let us know how we may assist you. Airtable is committed to participating in the interactive process and providing reasonable accommodations to qualified applicants.
Compensation awarded to successful candidates will vary based on their work location, relevant skills, and experience.
Our total compensation package also includes the opportunity to receive benefits, restricted stock units, and may include incentive compensation. To learn more about our comprehensive benefit offerings, please check out Life at Airtable.
For work locations in the San Francisco Bay Area, Seattle, New York City, and Los Angeles, the base salary range for this role is:$179,500-$221,500 USDFor all other work locations (including remote), the base salary range for this role is:$161,500-$199,300 USD
Please see our Privacy Notice for details regarding Airtable's collection and use of personal information relating to the application and recruitment process by clicking here.
🔒 Stay Safe from Job Scams
All official Airtable communication will come from an @airtable.com email address. We will never ask you to share sensitive information or purchase equipment during the hiring process. If in doubt, contact us at ***************. Learn more about avoiding job scams here.
$179.5k-221.5k yearly Auto-Apply 16d ago
Data Scientist
Victory 3.9
Data engineer job in Austin, TX
We are looking for a skilled Data Scientist who will help us analyze large amounts of raw information to find patterns and use them to optimize our performance. You will build data products to extract valuable business insights, analyze trends and help us make better decisions.
We expect you to be highly analytical with a knack for analysis, math and statistics, and a passion for machine-learning and research. Critical thinking and problem-solving skills are also required.
Data Scientist responsibilities are:
Research and detect valuable data sources and automate collection processes
Perform preprocessing of structured and unstructured data
Design, implement and deliver maintainable and high-quality code using best practices (e.g. Git/GitHub, secrets, configurations, YAML/JSON)
Review large amounts of information to discover trends and patterns
Create predictive models and machine-learning algorithms
Modify and combine different models through ensemble modeling
Organize and present information using data visualization techniques
Develop and suggest solutions and strategies to business challenges
Work together with engineering and product development teams
Data Scientist requirements are:
3+ years' experience working in a Data Scientist or Data Analyst position
Significant experience in data mining, machine-learning and operations research
Experience with data modeling, design patterns, building highly scalable and secured solutions preferred
Prior experience deploying data architectures on cloud providers (e.g., AWS, GCP, Azure), using DevOps tools, and automating data pipelines
Good experience using business intelligence/visualization tools (such as Tableau), data frameworks (such as Hadoop, DataFrames, RDDs, Dataclasses) and data formats (CSV, JSON, Parquet, Avro, ORC)
Advanced knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset
MA or PhD degree in Computer Science, Engineering or other relevant area; graduate degree in Data Science or other quantitative field is preferred
Must be a U.S. Citizen
$73k-101k yearly est. Auto-Apply 60d+ ago
Data Analytics Test I
2K Vegas
Data engineer job in Austin, TX
Data Test Analyst I
The Data Test Analyst I plays a vital role in ensuring the accuracy and reliability of game data. Working under the guidance of senior team members, the Data Test Analyst I assists in testing, validating, and analyzing data-driven features and systems within video games.
Primary Responsibilities
Assist in conducting thorough testing and validation of game data to ensure accuracy, consistency, and integrity across various platforms and environments.
Support the development and execution of basic SQL queries to test game telemetry events and data outcomes.
Learn and comprehend telemetry design and implementation and assist in identifying how new telemetry events impact existing game data.
Collaborate with senior team members to develop and complete test plans tailored to specific data-related features and systems.
Participate in regression testing processes to identify potential data anomalies or regressions caused by code changes or updates.
Assist in analyzing game data to identify patterns, trends, and anomalies, providing actionable insights to senior team members and collaborators.
Document test cases, findings, and outcomes under the supervision of senior analysts, and contribute to generating reports to communicate test results effectively.
Work closely with senior analysts, game developers, and other stakeholders to understand data requirements and ensure alignment with quality standards.
Stay updated on emerging trends and guidelines in data testing and the gaming industry to enhance your skills and knowledge.
Act as a trusted advisor to senior leadership on Quality Assurance matters.
Core Competencies
Behavioral
Follows established protocols, independently resolves routine challenges, and proactively identifies and mitigates potential risks.
Provides clear, consistent updates on progress and blockers.
Communicates optimally across teams, ensuring alignment and surfacing potential risks early.
Technical
Skilled in Jira, TestRail, Confluence, and Databricks; proficient in handling standard workflows and using tools for routine tasks.
Solid understanding of SQL; able to create, execute, and manipulate data sets with proficiency, including advanced functions.
Understands and performs basic ETL functions with SQL queries; capable of troubleshooting workflows, debugging errors, and optimizing performance.
Proficient-level SQL with a deep understanding of relational databases; experienced in handling sophisticated ETL failures, performance issues, and scaling.
Leadership
Provides clear, timely updates to collaborators on project status and risks.
Leads updates across teams and ensures dashboards accurately reflect project progress.
Strategic Influence & Business Acumen
Understands game development phases and can prioritize tasks within each cycle.
Proficient in streamlining processes and optimizing workflows across development stages.
Required Qualifications, Knowledge, and Job-Related Skills
2-4 years of experience in data testing, quality assurance, or a related role (internship experience is acceptable).
Strong communication skills in English, both written and verbal, to clearly document findings and provide feedback to level appropriate audiences.
Strong analytical skills, attention to detail, creative problem-solving skills, interpersonal abilities, and a sense of ownership over assigned duties.
Positive attitude along with flexibility, dependability, and excellent social skills, including the ability to collaborate effectively within a team environment.
Passion for video games and an understanding of gaming mechanics and player behaviors.
Technical Skills:
Basic proficiency in SQL and familiarity with scripting languages (e.g., Python).
Understanding of data structures, databases, and data manipulation techniques.
Exposure to data visualization tools such as Tableau or Power BI is a plus.
Physical requirements include the ability to:
Lift up to 20 lbs. unassisted or assisted occasionally.
Stand, sit, and walk for prolonged periods of time.
Move between three separate floors as needed.
This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee; other tasks and duties may be assigned or reassigned as needed.
2K is committed to providing reasonable accommodations in accordance with the Americans with Disabilities Act (ADA) and applicable state and local laws. Employment at 2K is considered at-will, except where prohibited by state legislation. Compensation and job postings may include disclosures required under state pay transparency laws.
2K is an Equal Opportunity Employer, committed to creating an inclusive work environment free from discrimination based on race, color, religion, national origin, gender, sexual orientation, gender identity, age, disability, veteran status, or any other characteristic protected by law.
$65k-96k yearly est. Auto-Apply 3d ago
Big Data Consultant
Sonsoft 3.7
Data engineer job in Austin, TX
Sonsoft, Inc. is a USA-based corporation duly organized under the laws of the State of Georgia. Sonsoft Inc. is growing at a steady pace, specializing in the fields of Software Development, Software Consultancy, and Information Technology Enabled Services.
Job Description:
Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture.
Must have strong programming knowledge of Core Java or Scala - Objects & Classes, Data Types, Arrays and String Operations, Operators, Control Flow Statements, Inheritance and Interfaces, Exception Handling, Serialization, Collections, Reading and Writing Files.
Must have hands on experience in design, implementation, and build of applications or solutions using Core Java/Scala.
Strong understanding of Hadoop fundamentals.
Must have experience working on Big Data Processing Frameworks and Tools - MapReduce, YARN, Hive, Pig.
Strong understanding of RDBMS concepts and must have good knowledge of writing SQL and interacting with RDBMS and NoSQL database - HBase programmatically.
Strong understanding of File Formats - Parquet, Hadoop File formats.
Proficient with application build and continuous integration tools - Maven, SBT, Jenkins, SVN, Git.
Experience in working on Agile and Rally tool is a plus.
Strong understanding and hands-on programming/scripting experience skills - UNIX shell, Python, Perl, and JavaScript.
Should have worked on large data sets and experience with performance tuning and troubleshooting.
Preferred
Knowledge of Java Beans, Annotations, Logging (log4j), and Generics is a plus.
Knowledge of Design Patterns - Java and/or GOF is a plus.
Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus.
Experience in the financial domain is preferred
Experience and desire to work in a global delivery environment
Qualifications
Bachelor's degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education.
At least 5 years of design and development experience in Big Data, Java, or data warehousing-related technologies.
At least 3 years of hands-on design and development experience with Big Data technologies - Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting.
Should be a strong communicator and be able to work independently with minimum involvement from client SMEs.
Should be able to work in a team in a diverse, multi-stakeholder environment.
Additional Information
U.S. citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time.
Note:
This is a FULL TIME job opportunity.
Only US Citizens, Green Card holders, and GC-EAD, H4-EAD, L2-EAD, and TN Visa holders can apply.
No OPT-EAD or H1-B for this position.
Please mention your Visa Status in your email or resume.
$76k-105k yearly est. 60d+ ago
Collibra Data Governance Consultant with MuleSoft
Tekskills 4.2
Data engineer job in Austin, TX
Contract duration (in months): 4
Implementation Partner: Infosys
Must Have Skills (Top 3 technical skills only):
1. Collibra Data Governance
2. Collibra Connect with MuleSoft
Detailed Job Description: Collibra Data Governance with experience in Collibra Connect for MuleSoft.
Desired years of experience*:
Above 5 years
Education/ Certifications (Required):
BE
Top 3 responsibilities you would expect the Subcon to shoulder and execute*:
1. Interact with business users to get the requirements
2. Configure the data governance structure in Collibra
3. Review deliverables from Offshore
Nagarjuna. G
Sr. Technical Recruiter
Phone: ************
Additional Information
All your information will be kept confidential according to EEO guidelines.
$78k-109k yearly est. 1d ago
Senior Data Insights Consultant
VMLY&R
Data engineer job in Austin, TX
Do you have strong data and analytical skills? Can you identify, build, and apply data models to support marketing insights? Would you like an exciting job with plenty of opportunity to grow? Then you might be the Senior Data Insights Consultant we are looking for!
What will your day look like?
As our new Senior Data Insights Consultant, you will join our growing Business Insights team. Here, you will provide your data skills and business knowledge to support our clients in making data-driven decisions to improve their digital communication. This entails designing and specifying data solutions and data integrations for various campaign platforms.
More specifically, your tasks will include:
* Identifying and designing value-adding insights solutions, leveraging data to optimize communication strategies.
* Interpreting business requests and clarifying data requirements.
* Assisting and advising on data models and scoping of new projects.
* Providing answers and insights to business-related questions via automated reporting solutions as well as ad-hoc data analyses.
* Analysing client prospect and customer data, gleaning insights that inform experience, content, and performance optimizations.
* Telling stories shaped and informed by data and your analysis.
* Collaborating with highly skilled specialists including Account Managers, Architects, Developers, Creatives, Strategists, Data Scientists and Marketing Operation experts to service our clients coherently.
* Promoting a data-driven agenda in a digital marketing context.
Who are you going to work with?
You will join a team of hands-on Data Analysts, Data Scientists, Consultants, and Data Engineers who are passionate about bringing value and knowledge from data. We are all about unlocking insights from data through analytics and making that insight applicable in 1:1 data-driven communication and CRM. Your work will always be firmly anchored in data in a cross-disciplinary setting, collaborating closely with highly enthusiastic experts.
What do you bring to the table?
As a person, you are outgoing and love being part of interdisciplinary projects and solutions. You are eager to learn and quick to understand the complexity of high-tech dialogues and solutions.
Furthermore, you have the drive, enthusiasm and technical skills to take the lead when facing the client in data and insights related matters. Through this, you strive to help and inspire the client to grow their business by combining data insights, performance analytics, and data engineering. It's an advantage if you have agency experience and marketing domain knowledge, but it's not a requirement.
In addition, you have:
* A minimum of 3-5 years of experience in a senior consultant/business liaison role related to data, BI, analytics or reporting solutions.
* Hands-on experience working with SQL, databases, ETL and reporting, data analysis through R, Python or other similar toolsets.
* Experience with report and dashboard development in Power BI/Tableau or similar tools.
* Experience with database and data model design for business intelligence and analytics solutions is an advantage but not a requirement.
* Experience with Google Insights/Google Analytics, Google Cloud Platform/BigQuery, Adobe Analytics and Salesforce is an advantage.
* Great communication skills in English.
A leader in personalized customer experiences
VML MAP is a world-leading Centre of Excellence that helps businesses humanize the relationship between the brand and the customer through hyper personalization at scale, marketing automation and CRM. With the brain of a consultancy, the heart of an agency and the power of technology and data, we work with some of the world's most admired brands to help them on their transformation journey to becoming truly customer-centric. Together, we are 1,000+ technology specialists, data scientists, strategic thinkers, consultants, operations experts, and creative minds from 55+ nationalities.
A global network
We are part of the global VML network that encompasses more than 30,000 employees across 150+ offices in 60+ markets, each contributing to a culture that values connection, belonging, and the power of differences.
#LI-EMEA
WPP (VML MAP) is an equal opportunity employer and considers applicants for all positions without discrimination or regard to characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers.
For more information, please visit our website, and follow VML MAP on our social channels via Instagram, LinkedIn and X.
When you click "Apply now" below, your information is sent to VML MAP. To learn more about how we process your personal data when you apply for a role with us, how you can update your information, or how to have it removed, please read our Privacy policy. California residents should read our California Recruitment Privacy Notice.
$78k-108k yearly est. 3d ago
AWS Data Migration Consultant
Slalom 4.6
Data engineer job in Austin, TX
Candidates can live within commutable distance to any Slalom office in the US. We have a hybrid and flexible environment. Who You'll Work With As a modern technology company, we've never met a technical challenge we didn't like. We enable our clients to learn from their data, create incredible digital experiences, and make the most of new technologies. We blend design, engineering, and analytics expertise to build the future. We surround our technologists with interesting challenges, innovative minds, and emerging technologies.
We are seeking an experienced Cloud Data Migration Architect with deep expertise in SQL Server, Oracle, DB2, or a combination of these platforms, to lead the design, migration, and optimization of scalable database solutions in the AWS cloud. This role will focus on modernizing on-premises database systems by architecting high-performance, secure, and reliable AWS-hosted solutions.
As a key technical leader, you will work closely with data engineers, cloud architects, and business stakeholders to define data strategies, lead complex database migrations, build out ETL pipelines, and optimize performance across legacy and cloud-native environments.
What You'll Do
* Design and optimize database solutions on AWS, including Amazon RDS, EC2-hosted instances, and advanced configurations like SQL Server Always On or Oracle RAC (Real Application Clusters).
* Lead and execute cloud database migrations using AWS Database Migration Service (DMS), Schema Conversion Tool (SCT), and custom automation tools.
* Architect high-performance database schemas, indexing strategies, partitioning models, and query optimization techniques.
* Optimize complex SQL queries, stored procedures, functions, and views to ensure performance and scalability in the cloud.
* Implement high-availability and disaster recovery (HA/DR) strategies including Always-On, Failover Clusters, Log Shipping, and Replication, tailored to each RDBMS.
* Ensure security best practices are followed including IAM-based access control, encryption, and compliance with industry standards.
* Collaborate with DevOps teams to implement Infrastructure-as-Code (IaC) using tools like Terraform, CloudFormation, or AWS CDK.
* Monitor performance using tools such as AWS CloudWatch, Performance Insights, Query Store, Dynamic Management Views (DMVs), or Oracle-native tools.
* Work with software engineers and data teams to integrate cloud databases into enterprise applications and analytics platforms.
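The indexing and query-optimization responsibilities above can be demonstrated in miniature with SQLite's EXPLAIN QUERY PLAN. This is a hedged sketch: the table and index names are illustrative, and production engines such as SQL Server or Oracle expose far richer plan tooling (Query Store, DMVs, AWR), but the scan-versus-index-seek distinction is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT * FROM orders WHERE customer_id = 7"

# Without a supporting index, the planner falls back to a full-table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index in place, the planner switches to an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(before)  # e.g. a SCAN over the orders table
print(after)   # e.g. a SEARCH using idx_orders_customer
```

The same before/after plan comparison is the core loop of query tuning regardless of engine: measure the plan, add or adjust an index or rewrite the query, and confirm the planner actually uses it.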
What You'll Bring
* 5+ years of experience in database architecture, design, and administration with at least one of the following: SQL Server, Oracle, or DB2.
* Expertise in one or more of the following RDBMS platforms: Microsoft SQL Server, Oracle, DB2.
* Hands-on experience with AWS database services (RDS, EC2-hosted databases).
* Strong understanding of HA/DR solutions and cloud database design patterns.
* Experience with ETL development and data integration, using tools such as SSIS, AWS Glue, or custom solutions.
* Familiarity with AWS networking components (VPCs, security groups) and hybrid cloud connectivity.
* Strong troubleshooting and analytical skills to resolve complex database and performance issues.
* Ability to work independently and lead database modernization initiatives in collaboration with engineering and client stakeholders.
Nice to Have
* AWS certifications such as AWS Certified Database - Specialty or AWS Certified Solutions Architect - Professional.
* Experience with NoSQL databases or hybrid data architectures.
* Knowledge of analytics and big data tools (e.g., Snowflake, Redshift, Athena, Power BI, Tableau).
* Familiarity with containerization (Docker, Kubernetes) and serverless technologies (AWS Lambda, Fargate).
* Experience with DB2 on-premise or cloud-hosted environments.
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this position, the target base salary pay ranges are as follows:
In Boston, Houston, Los Angeles, Orange County, Seattle, San Diego, Washington DC, New York, and New Jersey: $105,000-$147,000 for the Consultant level, $120,000-$169,000 for the Senior Consultant level, and $133,000-$187,000 for the Principal level.
In all other markets: $96,000-$135,000 for the Consultant level, $110,000-$155,000 for the Senior Consultant level, and $122,000-$172,000 for the Principal level.
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process.
We will accept applications until 1/31/2026 or until the positions are filled.
WHAT YOU DO AT AMD CHANGES EVERYTHING At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
HARDWARE & GAMING PLATFORM ENGINEER -
Silicon Validation & System Debug
THE ROLE:
Join AMD's Gaming and Hardware Platform Engineering team to lead system, silicon validation, and platform bring-up for next-generation gaming solutions. This role is hands-on, working in the lab with prototype boards, CPUs, GPUs, and memory subsystems to ensure high-performance, reliable gaming experiences. You will collaborate across silicon, firmware, and platform teams to debug complex issues and deliver robust solutions for gamers and developers worldwide.
KEY RESPONSIBILITIES:
* System & Platform Leadership
* Define and own system specifications for gaming platforms, including CPUs, GPUs, memory, I/O adapters, and power management.
* Lead cross-functional teams through all phases of hardware development, from design and bring-up to validation and production.
* Coordinate with silicon, firmware, and platform teams to ensure feature alignment and timely delivery.
* Silicon Debug & Validation
* Own post-silicon validation and debug for gaming and workstation boards.
* Perform board-level analysis: power sequencing, thermal testing, and signal integrity.
* Use advanced lab tools (oscilloscopes, logic analyzers, protocol analyzers) for root-cause analysis.
* Develop and execute comprehensive test plans for BIOS, drivers, silicon features, and OS certification.
* Drive defect prioritization and resolution plans during SOC bring-up and production phases.
* Technical Innovation
* Develop scripts and tools to automate hardware validation and improve test efficiency.
* Engage in pre-silicon emulation, simulation, and product engineering to ensure readiness for manufacturing and launch.
* Customer & Partner Engagement
* Collaborate with OEM/ODM partners on hardware integration and co-validation.
* Review platform designs against gaming workload requirements and optimize configurations.
* Release & Delivery
* Sign off on synchronized hardware releases for internal and external delivery.
* Oversee test execution and debug leadership across multiple platforms and teams.
REQUIRED TECH SKILLS:
* Strong experience in silicon bring-up and post-silicon debug.
* Hands-on proficiency with lab equipment for electrical and thermal testing.
* Deep knowledge of system architecture: x86 CPUs, GPUs, DDR5 memory, PCIe Gen5, and power management.
* Familiarity with firmware flashing, BIOS configuration, and hardware validation methodologies.
* Programming/scripting skills (Python, C/C++, Perl) for automation.
* Understanding of pre-silicon emulation, tapeout processes, and manufacturing test.
PREFERRED EXPERIENCE:
* Background in gaming platforms, high-performance computing, or workstation hardware.
* Expertise in SOC-level validation, IP debug, and performance optimization.
* Familiarity with Microsoft and Linux OS, virtualization (VMware, Xen), and certification processes.
* Excellent communication and leadership skills for cross-functional collaboration.
EDUCATION:
* Bachelor's or Master's degree in Electrical Engineering, Computer Engineering, Computer Science, or related discipline.
* 10+ years of industry experience, with at least 5 years focused on system or platform engineering and integration for gaming or high-performance hardware.
LOCATION:
Austin, TX
#LI-LM1
#LI-HYBRID
Benefits offered are described: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.
This posting is for an existing vacancy.
$86k-118k yearly est. 53d ago
Senior Data Insights Consultant
VML 4.6
Data engineer job in Austin, TX
Do you have strong data and analytical skills? Can you identify, build, and apply data models to support marketing insights? Would you like an exciting job with plenty of opportunity to grow? Then you might be the Senior Data Insights Consultant we are looking for!
What will your day look like?
As our new Senior Data Insights Consultant, you will join our growing Business Insights team. Here, you will provide your data skills and business knowledge to support our clients in making data-driven decisions to improve their digital communication. This entails designing and specifying data solutions and data integrations for various campaign platforms.
More specifically, your tasks will include:
Identifying and designing value-adding insights solutions, leveraging data to optimize communication strategies.
Interpreting business requests and clarifying data requirements.
Assisting and advising on data models and scoping of new projects.
Providing answers and insights to business-related questions via automated reporting solutions as well as ad-hoc data analyses.
Analysing client prospect and customer data, gleaning insights that inform experience, content, and performance optimizations.
Telling stories shaped and informed by data and your analysis.
Collaborating with highly skilled specialists including Account Managers, Architects, Developers, Creatives, Strategists, Data Scientists and Marketing Operation experts to service our clients coherently.
Promoting a data-driven agenda in a digital marketing context.
Who are you going to work with?
You will join a team of hands-on Data Analysts, Data Scientists, Consultants, and Data Engineers who are passionate about bringing value and knowledge from data. We are all about unlocking insights from data through analytics and making that insight applicable in 1:1 data-driven communication and CRM. Your work will always be firmly anchored in data in a cross-disciplinary setting, collaborating closely with highly enthusiastic experts.
What do you bring to the table?
As a person, you are outgoing and love being part of interdisciplinary projects and solutions. You are eager to learn and quick to understand the complexity of high-tech dialogues and solutions.
Furthermore, you have the drive, enthusiasm and technical skills to take the lead when facing the client in data and insights related matters. Through this, you strive to help and inspire the client to grow their business by combining data insights, performance analytics, and data engineering. It's an advantage if you have agency experience and marketing domain knowledge, but it's not a requirement.
In addition, you have:
A minimum of 3-5 years of experience in a senior consultant/business liaison role related to data, BI, analytics or reporting solutions.
Hands-on experience working with SQL, databases, ETL and reporting, data analysis through R, Python or other similar toolsets.
Experience with report and dashboard development in Power BI/Tableau or similar tools.
Experience with database and data model design for business intelligence and analytics solutions is an advantage but not a requirement.
Experience with Google Insights/Google Analytics, Google Cloud Platform/BigQuery, Adobe Analytics and Salesforce is an advantage.
Great communication skills in English.
A leader in personalized customer experiences
VML MAP is a world-leading Centre of Excellence that helps businesses humanize the relationship between the brand and the customer through hyper personalization at scale, marketing automation and CRM. With the brain of a consultancy, the heart of an agency and the power of technology and data, we work with some of the world's most admired brands to help them on their transformation journey to becoming truly customer-centric. Together, we are 1,000+ technology specialists, data scientists, strategic thinkers, consultants, operations experts, and creative minds from 55+ nationalities.
A global network
We are part of the global VML network that encompasses more than 30,000 employees across 150+ offices in 60+ markets, each contributing to a culture that values connection, belonging, and the power of differences.
#LI-EMEA
WPP (VML MAP) is an equal opportunity employer and considers applicants for all positions without discrimination or regard to characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers.
For more information, please visit our website, and follow VML MAP on our social channels via Instagram, LinkedIn and X.
When you click “Apply now” below, your information is sent to VML MAP. To learn more about how we process your personal data when you apply for a role with us, how you can update your information, or how to have it removed, please read our Privacy policy. California residents should read our California Recruitment Privacy Notice.
$80k-105k yearly est. 5d ago
Software Data Engineer
Omni Federal 4.5
Data engineer job in Austin, TX
Job Description
Job Title: Software Data Engineer
Security Clearance: Active DoD Secret Clearance
We question. We listen. We adapt.
Be honest. Be pragmatic.
Omni Federal, a Washington, DC-based software solutions provider founded in 2017, specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, Omni focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end-users, with innovation driven by its Omni Labs and SBIR Innovation centers. Omni has a presence in Boston, MA, Colorado Springs, CO, San Antonio, TX, and St. Louis, MO.
Why Omni?
Environment of Autonomy
Innovative Commercial Approach
People over process
We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory (ASWF), a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto “By Soldiers, For Soldiers,” ASWF equips service members to develop mission-critical software solutions independently, which is especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.
Required Skills:
Active DoD Secret Clearance (Required)
4+ years of experience in data science, data engineering, or similar roles.
Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow.
Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow).
Solid understanding of data warehousing concepts (e.g., Kimball, Inmon methodologies) and experience with data modeling for analytical purposes.
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation.
CompTIA Security+ Certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant.
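The ETL/ELT pipeline skills listed above reduce to a repeatable extract/transform/load pattern. The following is a minimal, stdlib-only Python sketch; the CSV feed, table name, and validation rule are all hypothetical stand-ins for a real source system:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in practice this would come from an API or file drop.
RAW = "device_id,temp_c\nA1,21.5\nA2,not_a_number\nA3,19.0\n"

def extract(text):
    # Parse the raw feed into dict rows, one per record.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Enforce types, dropping rows that fail validation;
    # a production pipeline would quarantine them instead.
    clean = []
    for row in rows:
        try:
            clean.append((row["device_id"], float(row["temp_c"])))
        except ValueError:
            continue
    return clean

def load(rows, conn):
    # Land the cleaned rows in the warehouse table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (device_id TEXT, temp_c REAL)"
    )
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
n, = conn.execute("SELECT COUNT(*) FROM readings").fetchone()
print(n)  # 2 (the invalid row was dropped)
```

In practice the same three stages are expressed as Airflow, Spark, or Glue tasks, with scheduling, retries, and dead-letter handling layered on top.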
Nice to Have:
Familiarity with SBIR technologies and transformative platform shifts
Experience working in Agile or DevSecOps environments
2+ years of experience interfacing with platform engineers and data visibility teams, managing AWS resources, and administering GitLab
About Omni Federal
Omni Federal is a small business Defense Contractor focused on modern application development & deployment, cloud enablement, data analytics and DevSecOps services for the Federal government. Our past performance is a mix of commercial and federal business that allows us to leverage the latest commercial technologies and processes and adapt them to the Federal government. Omni Federal designs, builds and operates data-rich applications leveraging advanced data modeling, machine learning and data visualization techniques to empower our customers to make better data-driven decisions. We are on the forefront of Modernization and Automation, and are providing our Customers the option through our services to help them get to where they want to be, and ultimately the end-user.
$83k-112k yearly est. 29d ago
Senior Data Engineer
Icon Mechanical 4.8
Data engineer job in Austin, TX
ICON is seeking a Senior Data Engineer to join our Data Intelligence & Systems Architecture (DISA) team. This engineer will play a foundational role in shaping ICON's enterprise data platform within Palantir Foundry, owning the ingestion, modeling, and activation of data that powers reporting, decision-making, and intelligent automation across the company.
You will work closely with teams across Supply Chain & Manufacturing, Finance & Accounting, Human Resources, Software, Field Operations and R&D to centralize high-value data sources, model them into scalable assets, and enable business-critical use cases, ranging from real-time reporting to operations-focused AI/ML solutions. This is a highly cross-functional and technical role, ideal for someone with strong data engineering skills, deep business curiosity, and a bias toward action. This role is based at ICON's headquarters in Austin, TX and reports to the Senior Director of Operations.
RESPONSIBILITIES:
Lead data ingestion and transformation pipelines within Palantir Foundry, integrating data from internal tools, SaaS platforms, and industrial systems
Model and maintain high-quality, governed data assets to support use cases in reporting, diagnostics, forecasting, and automation
Build analytics frameworks and operational dashboards that give teams real-time visibility into project progress, cost, equipment status, and material flow
Partner with business stakeholders and technical teams to translate pain points and questions into scalable data solutions
Drive the development of advanced analytics capabilities, including predictive maintenance, proactive purchasing workflows, and operations intelligence
Establish best practices for pipeline reliability, versioning, documentation, and testing within Foundry and across the data platform
Mentor team members and contribute to a growing culture of excellence in data and systems engineering
QUALIFICATIONS:
8+ years of experience in data engineering, analytics engineering, or backend software development
Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
Strong hands-on experience with Palantir Foundry, including Workshop, Code Repositories, Ontologies, and Object Models
Proficiency in Python and SQL for pipeline development and data modeling
Experience integrating data from APIs, machine data sources, ERP systems, SaaS tools, and cloud storage platforms
Strong understanding of data modeling principles, business logic abstraction, and stakeholder collaboration
Proven ability to independently design, deploy, and scale data products in fast-paced environments
PREFERRED SKILLS AND EXPERIENCE:
Experience supporting Manufacturing, Field Operations, or Supply Chain teams with near real-time analytics
Familiarity with platforms such as Procore, Coupa, NetSuite, or similar
Experience building predictive models or workflow automation in or on top of enterprise platforms
Background in data governance, observability, and maintaining production-grade pipelines
ICON is an equal opportunity employer committed to fostering an innovative, inclusive, diverse and discrimination-free work environment. Employment with ICON is based on merit, competence, and qualifications. It is our policy to administer all personnel actions, including recruiting, hiring, training, and promoting employees, without regard to race, color, religion, gender, sexual orientation, gender identity, national origin or ancestry, age, disability, marital status, veteran status, or any other legally protected classification in accordance with applicable federal and state laws. Consistent with the obligations of these laws, ICON will make reasonable accommodations for qualified individuals with disabilities.
Furthermore, as a federal government contractor, the Company maintains an affirmative action program which furthers its commitment and complies with recordkeeping and reporting requirements under certain federal civil rights laws and regulations, including Executive Order 11246, Section 503 of the Rehabilitation Act of 1973 (as amended) and the Vietnam Era Veterans' Readjustment Assistance Act of 1974 (as amended).
Headhunters and recruitment agencies may not submit candidates through this application. ICON does not accept unsolicited headhunter and agency submissions for candidates and will not pay fees to any third-party agency without a prior agreement with ICON.
As part of our compliance with these obligations, the Company invites you to voluntarily self-identify as set forth below. Provision of such information is entirely voluntary and a decision to provide or not provide such information will not have any effect on your employment or subject you to any adverse treatment. Any and all information provided will be considered confidential, will be kept separate from your application and/or personnel file, and will only be used in accordance with applicable laws, orders and regulations, including those that require the information to be summarized and reported to the federal government for civil rights enforcement purposes.
Internet Applicant Employment Notices
$82k-114k yearly est. 31d ago
Sr. Data Engineer
Visa 4.5
Data engineer job in Austin, TX
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.
Job Description
Visa U.S.A. Inc., a Visa Inc. company, needs a Sr. Data Engineer (multiple openings) in Austin, Texas to:
Design, build, and launch efficient and reliable data pipelines to move data (in volumes both large and small) into and out of the Hadoop data lake.
Architect, build, and launch new data pipelines and models that provide intuitive analytics to customers.
Design and develop new systems and tools to enable customers with data analysis and enhanced understanding of their data.
Create, automate, and scale repeatable analyses, and build self-service tools for business users.
Execute data engineering projects ranging from small to large, both individually and as part of a project team.
Work across multiple teams in high-visibility roles and own the solution end to end.
Assist in scoping and designing analytic data assets.
Build and maintain robust data engineering processes to develop and implement self-serve data.
Perform other tasks spanning R&D, data governance, system infrastructure, and other cross-team functions.
Position reports to the Austin, Texas office and may allow for partial telecommuting.
Qualifications
Basic Qualifications:
Employer will accept a Master's degree in Computer Science, Management Information Systems, or Business Analytics and 2 years of experience in Information Technology or a Data Engineer-related occupation.
Position requires knowledge or experience in the following:
Proficient in utilizing Hadoop for distributed storage and large-scale data processing, enabling efficient handling of massive datasets.
Proficient in leveraging Apache Spark for large-scale data processing and advanced analytics, enabling both near real-time data insights and efficient batch processing.
Proficient in coding with PySpark, Scala, and Python for scalable data processing, transformation, and analysis, enabling seamless integration with big data frameworks.
Knowledge in designing, implementing, and managing both relational (RDBMS) and non-relational (NoSQL) database management systems to ensure efficient data storage, retrieval, and scalability.
Proficient in crafting and optimizing complex SQL queries for efficient data retrieval and manipulation, ensuring high performance and reducing query execution times.
Proficient in advanced database performance tuning, including indexing strategies and query optimization, to significantly enhance overall system efficiency and responsiveness.
Skilled in Presto, Impala, SparkSQL, and Hive for performing SQL-like querying on big data, facilitating extensive data analysis and reporting.
Skilled in data modeling and data warehousing techniques to design and implement efficient, scalable, and robust data storage solutions that support business intelligence initiatives.
Proficient in utilizing data visualization and business intelligence tools such as Tableau and Power BI to create insightful, interactive reports and dashboards that drive data-driven decision-making.
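The SQL skills named above (window functions, indexing, and query optimization in SparkSQL/Presto/Hive-style engines) can be illustrated with a minimal, self-contained sketch. It uses Python's built-in sqlite3 purely as a stand-in engine (SQLite supports window functions since version 3.25); the table, data, and index name are hypothetical.

```python
import sqlite3

# In-memory database standing in for a warehouse table; in practice this
# kind of query would run on SparkSQL, Presto, or Hive over far larger data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txns (merchant TEXT, day INTEGER, amount REAL);
INSERT INTO txns VALUES
  ('a', 1, 10.0), ('a', 2, 30.0), ('a', 3, 20.0),
  ('b', 1, 5.0),  ('b', 2, 15.0);
-- An index on the partition/order keys helps the engine avoid full scans.
CREATE INDEX idx_merchant_day ON txns (merchant, day);
""")

# Window function: running total of amount per merchant, ordered by day.
rows = conn.execute("""
SELECT merchant, day,
       SUM(amount) OVER (PARTITION BY merchant ORDER BY day) AS running_total
FROM txns
ORDER BY merchant, day
""").fetchall()

for r in rows:
    print(r)
```

The PARTITION BY clause restarts the running total for each merchant, which is the typical shape of per-entity analytics described in the qualifications.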
Additional Information
Worksite: Austin, Texas
This is a hybrid position. Hybrid employees alternate between remote and office work. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
Travel Requirements: This position does not require travel.
Mental/Physical Requirements: This position will be performed in an office setting. The position will require the incumbent to sit and stand at a desk, communicate in person and by telephone, and frequently operate standard office equipment, such as telephones and computers.
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
U.S. APPLICANTS ONLY: The estimated salary range for a new hire into this position is $110,700.00 to $171,800.00 USD per year, which may include potential sales incentive payments (if applicable). Salary may vary depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position may be eligible for bonus and equity. Visa has a comprehensive benefits package for which this position may be eligible that includes Medical, Dental, Vision, 401 (k), FSA/HSA, Life Insurance, Paid Time Off, and Wellness Program.
$110.7k-171.8k yearly 4d ago
Backend Engineer
Harnham
Data engineer job in Austin, TX
About the Role
We are looking for a Backend Engineer to design, build, and operate a metrics platform that supports statistical evaluation at scale. You will own core components of the system, including data models, storage layouts, compute pipelines, and developer-facing frameworks for writing and testing metrics.
This role involves close collaboration with metric authors and metric consumers across engineering, analytics, and QA, ensuring that metric results are reliable, performant, and easy to use end to end.
What You'll Do
Own and evolve the metrics platform, including schemas, storage layouts optimized for high-volume writes and fast analytical reads, and clear versioning strategies
Build and maintain a framework for writing and running metrics, including interfaces, examples, local execution, and CI compatibility checks
Design and implement testing systems for metrics and pipelines, including unit, contract, and regression tests using synthetic and sampled data
Operate compute and storage systems in production, with responsibility for monitoring, debugging, stability, and cost awareness
Partner with metric authors and stakeholders across development, analytics, and QA to plan changes and roll them out safely
What You'll Need
Strong experience using Python in production, including asynchronous programming (e.g., asyncio, aiohttp, FastAPI)
Advanced SQL skills, including complex joins, window functions, CTEs, and query optimization through execution plan analysis
Solid understanding of data structures and algorithms, with the ability to make informed performance trade-offs
Experience with databases, especially PostgreSQL (required); experience with ClickHouse is a strong plus
Understanding of OLTP vs. OLAP trade-offs and how schema and storage decisions affect performance
Experience with workflow orchestration tools such as Airflow (used today), Prefect, Argo, or Dagster
Familiarity with data libraries and validation frameworks (NumPy, pandas, Pydantic, or equivalents)
Experience building web services (FastAPI, Flask, Django, or similar)
Comfort working with containers and orchestration tools like Docker and Kubernetes
Experience working with large-scale datasets and data-intensive systems
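As a rough illustration of the asynchronous-Python requirement above, here is a minimal asyncio sketch. The metric names and delays are hypothetical stand-ins for real I/O-bound work such as database queries or HTTP calls.

```python
import asyncio

# Hypothetical metric coroutine; a real one would query a database or API.
async def compute_metric(name: str, delay: float):
    await asyncio.sleep(delay)      # stands in for I/O latency
    return name, delay * 100        # dummy result value

async def run_all():
    # gather() runs the coroutines concurrently, so total wall time is
    # roughly the slowest metric rather than the sum of all of them.
    results = await asyncio.gather(
        compute_metric("latency_p50", 0.01),
        compute_metric("latency_p99", 0.02),
        compute_metric("error_rate", 0.015),
    )
    return dict(results)

metrics = asyncio.run(run_all())
print(metrics)
```

The same pattern underlies async web frameworks like FastAPI and aiohttp named in the listing: independent I/O-bound tasks are awaited concurrently instead of serially.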
Nice to Have
Ability to read and make small changes in C++ code
Experience building ML-adjacent metrics or evaluation infrastructure
Familiarity with Parquet and object storage layout/partitioning strategies
Experience with Kafka or task queues
Exposure to basic observability practices (logging, metrics, tracing)
$71k-98k yearly est. 1d ago
Big Data Consultant (Full-Time Position)
Sonsoft 3.7
Data engineer job in Austin, TX
SonSoft Inc. is a USA based corporation duly organized under the laws of the Commonwealth of Georgia. SonSoft Inc is growing at a steady pace specializing in the fields of Software Development, Software Consultancy, and Information Technology Enabled Services.
Job Description
Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture.
Must have strong programming knowledge of Core Java or Scala - Objects & Classes, Data Types, Arrays and String Operations, Operators, Control Flow Statements, Inheritance and Interfaces, Exception Handling, Serialization, Collections, Reading and Writing Files.
Must have hands on experience in design, implementation, and build of applications or solutions using Core Java/Scala.
Strong understanding of Hadoop fundamentals.
Must have experience working on Big Data Processing Frameworks and Tools - MapReduce, YARN, Hive, Pig.
Strong understanding of RDBMS concepts and must have good knowledge of writing SQL and interacting with RDBMS and NoSQL database - HBase programmatically.
Strong understanding of File Formats - Parquet, Hadoop File formats.
Proficient with application build and continuous integration tools - Maven, SBT, Jenkins, SVN, Git.
Experience working in Agile and with the Rally tool is a plus.
Strong understanding of and hands-on programming/scripting experience with UNIX shell, Python, Perl, and JavaScript.
Should have worked on large data sets and have experience with performance tuning and troubleshooting.
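The scripting and large-data-set bullets above can be sketched with a small, self-contained Python example. The sample data here is hypothetical, standing in for a large delimited file that would normally be streamed from disk with open(path).

```python
import csv
import io
from collections import Counter

# Hypothetical in-memory sample standing in for a large CSV file on disk.
raw = io.StringIO("user,event\nalice,click\nbob,view\nalice,click\n")

# Iterating the reader row by row keeps memory usage flat regardless of
# file size - the usual first step before reaching for MapReduce or Spark.
counts = Counter(row["user"] for row in csv.DictReader(raw))
print(counts)
```

The streaming approach matters for the performance-tuning requirement: loading the whole file into memory is what typically breaks first as data sets grow.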
Preferred
Knowledge of Java Beans, Annotations, Logging (log4j), and Generics is a plus.
Knowledge of Design Patterns - Java and/or GOF is a plus.
Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus.
Experience in the financial domain is preferred.
Experience with, and a desire to work in, a global delivery environment.
Qualifications
Bachelor's degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education.
At least 5 years of Design and development experience in Big data, Java or Data warehousing related technologies.
At least 3 years of hands-on design and development experience with Big Data technologies - Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting.
Should be a strong communicator and be able to work independently with minimum involvement from client SMEs.
Should be able to work on a team in a diverse, multi-stakeholder environment.
Additional Information
Connect with me at ******************************************* (For Direct Client Requirements).
** U.S. Citizens and those who are authorized to work independently in the United States are encouraged to apply. We are unable to sponsor at this time.
Note:
This is a Full-Time & Permanent job opportunity for you.
Only US Citizen, Green Card Holder, GC-EAD, H4-EAD & L2-EAD can apply.
No OPT-EAD, H1B & TN candidates, please.
Please mention your Visa Status in your email or resume.
** All your information will be kept confidential according to EEO guidelines.
$76k-105k yearly est. 60d+ ago
Data Scientist, Product Analytics
Airtable 4.2
Data engineer job in Austin, TX
Airtable is the no-code app platform that empowers people closest to the work to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done.
Airtable is seeking a product-focused Data Scientist to join our Analytics & Data Science team. In this high-impact role, you'll partner closely with product development teams to transform raw user data into actionable insights that drive growth for Airtable's self-serve business. You'll own critical data pipelines, design and analyze experiments, build dashboards, and deliver strategic insights that inform executive decision-making. This is a unique opportunity to shape the future of a data-driven, AI-native SaaS company and scale analytics best practices across the organization.
What you'll do
Own and maintain core product data pipelines across DBT, Looker, and Omni, ensuring reliability, scalability, and minimal downtime
Build and refine dashboards that deliver self-serve, real-time insights for high-priority product areas
Lead the development and delivery of company-wide strategic insights that connect user behavior patterns and inform executive decisions
Partner with product and engineering teams to define tracking requirements, implement instrumentation, validate data, and deliver launch-specific dashboards or reports
Establish trusted partnerships with product managers, engineers, analysts, and leadership as the go-to resource for product data insights and technical guidance
Collaborate with leadership to define the analytics roadmap, prioritize high-impact initiatives, and assess resource needs for scaling product analytics capabilities
Mentor junior team members and cross-functional partners on analytics best practices and data interpretation; create documentation and training materials to scale institutional knowledge
Support end-to-end analytics for all product launches, including tracking implementation, validation, and post-launch reporting with documented impact measurements
Deliver comprehensive strategic analyses or experiments that connect user behavior patterns and identify new growth opportunities
Lead or participate in cross-functional projects where data science contributions directly influence product or strategy decisions
Migrate engineering team dashboards to Omni or Databricks, enabling self-serve analytics
Who you are
Bachelor's degree in computer science, data science, mathematics/statistics, or related field
6+ years of experience as a data scientist, data analyst, or data engineer
Experience supporting product development teams and driving product growth insights
Background in SaaS, consumer tech, or data-driven product environments preferred
Expert in SQL and modern data modeling (e.g., dbt, Databricks, Snowflake, BigQuery); sets standards and mentors others on best practices
Deep experience with BI tools and modeling (e.g., Looker, Omni, Hex, Tableau, Mode)
Proficient with experimentation platforms and statistical libraries (e.g., Eppo, Optimizely, LaunchDarkly, scipy, statsmodels)
Proven ability to apply AI/ML tools - from core libraries (scikit-learn, PyTorch, TensorFlow) to GenAI platforms (ChatGPT, Claude, Gemini) and AI-assisted development (Cursor, GitHub Copilot)
Strong statistical foundation; designs and scales experimentation practices that influence product strategy and culture
Translates ambiguous business questions into structured analyses, guiding teams toward actionable insights
Provides thought leadership on user funnels, retention, and growth analytics
Ensures data quality, reliability, and consistency across critical business reporting and analytics workflows
Experience at an AI-native company, with exposure to building or scaling products powered by AI
Knowledge of product analytics tracking frameworks (e.g., Segment, Amplitude, Mixpanel, GA4) and expertise in event taxonomy design
Strong documentation and knowledge-sharing skills; adept at creating technical guides, playbooks, and resources that scale team effectiveness
Models curiosity, creativity, and a learner's mindset; thrives in ambiguity and inspires others to do the same
Crafts compelling narratives with data, aligning stakeholders at all levels and driving clarity in decision-making
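As a hedged illustration of the experimentation and statistics skills listed above, here is a self-contained two-proportion z-test in pure standard-library Python; real analyses would typically use scipy or statsmodels, as the listing suggests. The conversion counts are hypothetical.

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/1000 control vs. 150/1000 treatment conversions.
z, p = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 3), round(p, 4))
```

A 12% vs. 15% conversion split at this sample size lands right around the conventional 0.05 significance threshold, which is exactly the kind of borderline result a product data scientist must interpret carefully.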
Airtable is an equal opportunity employer. We embrace diversity and strive to create a workplace where everyone has an equal opportunity to thrive. We welcome people of different backgrounds, experiences, abilities, and perspectives. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or any characteristic protected by applicable federal and state laws, regulations and ordinances. Learn more about your EEO rights as an applicant.
VEVRAA-Federal Contractor
If you have a medical condition, disability, or religious belief/practice which inhibits your ability to participate in any part of the application or interview process, please complete our Accommodations Request Form and let us know how we may assist you. Airtable is committed to participating in the interactive process and providing reasonable accommodations to qualified applicants.
Compensation awarded to successful candidates will vary based on their work location, relevant skills, and experience.
Our total compensation package also includes the opportunity to receive benefits, restricted stock units, and may include incentive compensation. To learn more about our comprehensive benefit offerings, please check out Life at Airtable.
For work locations in the San Francisco Bay Area, Seattle, New York City, and Los Angeles, the base salary range for this role is: $205,200-$266,300 USD.
For all other work locations (including remote), the base salary range for this role is: $185,300-$240,000 USD.
Please see our Privacy Notice for details regarding Airtable's collection and use of personal information relating to the application and recruitment process by clicking here.
🔒 Stay Safe from Job Scams
All official Airtable communication will come from an @airtable.com email address. We will never ask you to share sensitive information or purchase equipment during the hiring process. If in doubt, contact us at ***************. Learn more about avoiding job scams here.
The average data engineer in Austin, TX earns between $67,000 and $123,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Austin, TX
$91,000
What are the biggest employers of Data Engineers in Austin, TX?
The biggest employers of Data Engineers in Austin, TX are: