
Data engineer jobs in Pflugerville, TX - 1,733 jobs

  • Delivery Consultant - GenAI/ML & Data Science, AWS Industries

    Amazon 4.7 company rating

    Data engineer job in Austin, TX

    Application deadline: Jan 16, 2026
    The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.
    Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, you will provide guidance on industry trends, emerging technologies, and innovative solutions, and you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.
    The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
    Key job responsibilities
    As an experienced technology professional, you will be responsible for:
    - Designing, implementing, and building complex, scalable, and secure GenAI and ML applications and models built on AWS, tailored to customer needs
    - Providing technical guidance and implementation support throughout project delivery, with a focus on using AWS AI/ML services
    - Collaborating with customer stakeholders to gather requirements and propose effective model training, building, and deployment strategies
    - Acting as a trusted advisor to customers on industry trends and emerging technologies
    - Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts
    (A brief illustrative sketch of calling an AWS GenAI service follows this listing.)
    About the team
    About AWS: Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
    Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
    Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Basic Qualifications
- Experience developing software code in one or more programming languages (Java, Python, etc.)
- PhD or Master of Science degree in Computer Science, or a related technical, math, or scientific field (or equivalent experience)
- 5+ years of cloud-based solution (AWS or equivalent), system, network, and operating system experience
- 5+ years of experience hosting and deploying GenAI/ML solutions (e.g., for data pre-processing, training, deep learning, fine-tuning, and inference) and/or data science experience
- 5+ years of experience with coding, data query languages (e.g., SQL), and scripting languages (e.g., Python)
Preferred Qualifications
- Knowledge of the AWS platform and tools or equivalent cloud experience. Ideally, the candidate has AWS experience with proficiency in a wide range of AWS services (e.g., SageMaker, Bedrock, EMR, S3, OpenSearch Service, Step Functions, Lambda, and EC2)
- AWS Professional-level certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional, Machine Learning Specialty) preferred
- Hands-on experience with deep learning (e.g., CNN, RNN, LSTM, Transformer), machine learning, CV, GNN, or distributed training
- Experience with coding, automation, and scripting (e.g., Terraform, Python)
- Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company's reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information.
If the country/region you're applying in isn't listed, please contact your Recruiting Partner. The base pay range for this position is listed below. Hourly pay ranges include the base pay rate plus the highest available shift differential which applies depending on the shift you select. Colorado $131,300 - $177,600 annually / hourly National $118,200 - $204,300 annually / hourly For salaried roles, your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and option for Supplemental life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at ***************************************************** . For hourly roles, as a total compensation company, you are eligible for additional earnings including overtime pay and performance bonuses. Final pay will be based on factors including shift selection and location. Starting Day 1 of employment, Amazon offers EAP, Mental Health Support, Medical Advice Line, 401(k) matching. Learn more about our benefits at ********************************************* .
    $131.3k-177.6k yearly 5d ago
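The responsibilities above center on building GenAI applications with AWS AI/ML services such as Bedrock and SageMaker. As a rough, illustrative sketch only (not part of the posting), the snippet below shows one way to invoke a foundation model through Amazon Bedrock with boto3; the region, model ID, and prompt are placeholders, and IAM permissions and model access are assumed.

```python
import json
import boto3

# Hypothetical example: invoke a foundation model through Amazon Bedrock.
# Region, model ID, and prompt are placeholders; IAM permissions are assumed.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize last week's pipeline failures."}],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream; read and decode the JSON payload.
result = json.loads(response["body"].read())
print(result)
```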

  • SAP Data Architect

    Excelon Solutions 4.5 company rating

    Data engineer job in Austin, TX

    Title: SAP Data Architect
    Mode: Full-time
    Expectations / Deliverables for the Role
    Builds the SAP data foundation by defining how SAP systems store, share, and manage trusted enterprise data. Produces reference data architectures by leveraging expert input from application, analytics, integration, platform, and security teams; these architectures form the basis for new solutions and enterprise data initiatives. Enables analytics and AI use cases by ensuring data is consistent, governed, and discoverable. Leverages SAP Business Data Cloud, Datasphere, MDG, and related capabilities to unify data and eliminate duplicate data copies. Defines and maintains common data model catalogs to create a shared understanding of core business data. Evolves data governance, ownership, metadata, and lineage standards across the enterprise. Protects core transactional systems by preventing excessive replication and extraction loads.
    Technical Proficiency
    Strong knowledge of SAP master and transactional data domains. Hands-on experience with SAP MDG, Business Data Cloud, BW, Datasphere, or similar platforms. Expertise in data modeling, metadata management, data quality, and data governance practices. Understanding of data architectures that support analytics, AI, and regulatory requirements. Experience integrating SAP data with non-SAP analytics and reporting platforms.
    Soft Skills
    Ability to align data and engineering teams around a shared data vision and drive consensus on data standards and decisions. Strong facilitation skills to resolve data ownership and definition conflicts. Clear communicator who can explain architecture choices, trade-offs, and cost impacts to stakeholders. Pragmatic mindset focused on value, reuse, and simplification. Comfortable challenging designs constructively in ARB reviews.
    $92k-124k yearly est. 4d ago
  • Data Analytics Engineer

    Harnham

    Data engineer job in Austin, TX

    About the Role
    We are looking for a Data Analytics Engineer who sits at the intersection of data engineering and analytics. In this role, you will transform raw, messy data from vehicles, APIs, and operational systems into clean, reliable datasets that are trusted and widely used - from engineering teams to executive leadership. You will own data pipelines end to end and build dashboards that surface insights, track performance, and help teams quickly identify issues. (An illustrative pipeline sketch follows this listing.)
    What You'll Do
    Build and maintain ETL pipelines that ingest data from diverse internal systems into a centralized analytics warehouse. Work with unique and high-volume datasets, including vehicle telemetry, sensor-derived signals, logistics data, and system test results. Write efficient, well-structured SQL to model and prepare data for analysis and reporting. Design, build, and maintain dashboards (e.g., Grafana or similar) used to monitor system performance and operational health. Partner closely with engineering, operations, and leadership teams to understand data needs and deliver actionable datasets. Explore internal AI- and LLM-based tools to automate analysis and uncover new insights.
    What You'll Need
    Strong hands-on experience with Python and data libraries such as pandas, Polars, or similar. Advanced SQL skills, including complex joins, window functions, and query optimization. Proven experience building and operating ETL pipelines using modern data tooling. Experience with BI and visualization tools (e.g., Grafana, Tableau, Looker). Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect. A high-level understanding of LLMs and interest in applying them to data and analytics workflows. A strong ownership mindset and commitment to data quality and reliability.
    Nice to Have
    Experience with ClickHouse or other analytical databases (e.g., Snowflake, BigQuery, Redshift). Background working with vehicle, sensor, or logistics data. Prior experience in autonomous systems, robotics, or other data-intensive, hardware-driven domains.
    $78k-106k yearly est. 2d ago
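The Harnham listing above asks for pandas/Polars-based pipelines and window-function-style SQL transforms. The snippet below is a minimal, hypothetical sketch of one such step: cleaning raw telemetry and deriving a per-vehicle rolling average before writing a curated Parquet file. File and column names are invented.

```python
import pandas as pd

# Illustrative ETL step (not from the posting): ingest raw telemetry,
# clean it, and derive a per-vehicle rolling average for a dashboard.
raw = pd.read_csv("telemetry_raw.csv", parse_dates=["event_time"])

clean = (
    raw.dropna(subset=["vehicle_id", "speed_kph"])
       .drop_duplicates(subset=["vehicle_id", "event_time"])
       .sort_values(["vehicle_id", "event_time"])
)

# Window-function-style transform: 5-sample rolling mean per vehicle.
clean["speed_rolling_avg"] = (
    clean.groupby("vehicle_id")["speed_kph"]
         .transform(lambda s: s.rolling(window=5, min_periods=1).mean())
)

clean.to_parquet("telemetry_curated.parquet", index=False)
```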
  • CSV Engineer

    Advantage Technical

    Data engineer job in Austin, TX

    Computer System Validation Engineer
    Contract: 6 months. Hours: 8-5 (flexible as needed). Rate: $50-$55/hr. Sponsorship: Not available. No C2C or agency referrals.
    Maintain and improve quality engineering programs, ensuring compliance with regulatory standards and company policies. Lead and support computer system validation (CSV) activities across software and IT systems.
    Key Responsibilities: Define and implement the validation strategy for the V&V program. Complete, review, and approve all CSV deliverables. Conduct assessments including Validation Applicability, 21 CFR Part 11, and Software Functionality Risk Assessments. Review test cases, executed protocols, and reports to ensure proper coverage and compliance with validation standards. Provide guidance on incident/defect handling and risk mitigation. Review and update SOPs, templates, and validation documentation. Support internal and external audits. Develop and maintain quality standards, inspection/testing procedures, and corrective actions. Compile and report quality data; liaise with Product Engineers, Quality Program Managers, and regulatory bodies as needed.
    Qualifications: Experience in computer system validation, software V&V, and regulatory compliance. Strong knowledge of 21 CFR Part 11, SOPs, and CSV documentation. Ability to conduct risk assessments and provide independent validation reviews. Excellent communication and training skills.
    $50-55 hourly 2d ago
  • Senior Software Engineer

    Engtal

    Data engineer job in Austin, TX

    Software Engineer - Backend Systems (Energy Infrastructure Platform) We're working with a bold, mission-driven power company that's rebuilding the backbone of the American energy system. By combining software, distributed infrastructure, and engineering speed, they are rolling out a next-gen platform that strengthens the grid and empowers communities nationwide. If you're passionate about transforming critical infrastructure and building software that touches the real world, this is your moment. The team is developing the operating system powering their entire distributed energy platform. From orchestrating fleets of batteries and real-time grid response to managing field deployments and automating logistics, this is the digital foundation for the future of American power. As a backend software engineer, you'll help scale the core systems that bring this infrastructure to life. What You'll Do: Design core backend services and scalable APIs that control everything from grid balancing to field ops. Own the stack: Work primarily in Go and Python, deploying into modern AWS-based cloud environments. Use Terraform to define infrastructure-as-code and Temporal to orchestrate complex workflows. Collaborate across disciplines-deployment, hardware, and operations-to build high-reliability systems that cross physical and digital boundaries. Every line of code you write will have real-world impact on homes, businesses, and the grid. What You Bring: 2+ years of professional experience in backend software development (open to multiple seniority levels). Proficiency in Go, Python, Java, or similar languages. Familiarity with AWS, Docker, Kubernetes, or other modern cloud and container technologies. Passion for clean, maintainable code and solid testing practices. Strong product mindset-you're thinking about the impact and user outcome, not just the implementation. This is your chance to build systems that don't just ship a product, they power a nationwide platform that's redefining how energy works in America. You'll shape system behavior from silicon to deployment, partner with elite engineers, and lead innovation in one of the most critical and dynamic sectors of our time.
    $88k-117k yearly est. 3d ago
  • Data Enablement Consultant - Transformation Programs

    Eclerx Services

    Data engineer job in Austin, TX

    Type: Fixed Term (6 months) eligible for employee benefits Department: Technology We are looking for a strategic and collaborative Data Enablement Consultant to support enterprise-wide transformation programs by identifying, defining, and enabling access to order related data. This role will work closely with cross-functional teams-including business units, technology teams, and data platform owners-to ensure the right data is available, accessible, and trusted across transformation initiatives. You will play a critical role in bridging the gap between data consumers and data producers, accelerating the delivery of transformation outcomes through well-governed and fit-for-purpose data. Responsibilities * Partner with transformation program teams to understand business objectives and identify order-related data needs (e.g., customer, product, finance, supplier, inventory, employee, etc.). * Work across business units to gather requirements, map data dependencies, and prioritize data enablement initiatives. * Collaborate with data engineering and data governance teams to ensure relevant data sources are ingested, modeled, and made available in enterprise data platforms. * Lead efforts to catalog, document, and communicate newly enabled datasets, ensuring alignment with data governance and metadata standards. * Develop and maintain a backlog of data enablement workstreams linked to key transformation milestones. * Facilitate workshops and discovery sessions with cross-functional stakeholders to uncover hidden or siloed data critical to transformation programs. * Serve as the liaison between business users and technical teams to ensure data needs are well-understood, translated into technical requirements, and delivered appropriately. * Monitor the usage and adoption of newly enabled data assets and address data quality or accessibility issues as needed. * Support data literacy by promoting understanding of newly enabled data and how it can be used effectively in the context of business transformation. Eligibility Requirements * Bachelor's degree in Information Systems, Data Science, Business, or related field. * 5+ years of experience in data management, data enablement, or analytics roles with cross-functional collaboration. * Proven experience working on or supporting large-scale transformation or change programs. * Strong understanding of data domains outside of order-related data (e.g., master data, financial data, customer data, supplier data, etc.). * Experience working with modern data platforms and tools (e.g., Snowflake, BigQuery, Power BI, Tableau, Collibra, Alation). * Strong stakeholder management and facilitation skills across business and technical teams. * Knowledge of data governance, metadata management, and data cataloging practices. * Preferred Qualifications: * Experience in a matrixed or federated data organization. * Familiarity with enterprise transformation methodologies or frameworks. * Understanding of data architecture concepts and enterprise data modeling. * Experience with Agile delivery environments and tools (e.g., Jira, Confluence). In the US, the target base salary for this role is $150,000-$200,000. Compensation is based on a range of factors that include relevant experience, knowledge, skills, other job-related qualifications, and geography. 
We expect the majority of candidates who are offered roles at our company to fall throughout the range based on these factors.
What We Offer
* Competitive salary and performance bonuses
* Flexible working hours
* Career growth opportunities and ongoing training
* Inclusive, supportive company culture
How to Apply
* Click "Apply Now" to submit your resume through our career site
* Be sure to include any relevant experience that aligns with the role
* Qualified candidates will be contacted by a member of our recruitment team for next steps
About eClerx
eClerx is a leading provider of productized services, bringing together people, technology and domain expertise to amplify business results. The firm provides business process management, automation, and analytics services to a number of Fortune 2000 enterprises, including some of the world's leading financial services, communications, retail, fashion, media & entertainment, manufacturing, travel & leisure, and technology companies. Incorporated in 2000, eClerx is traded on both the Bombay and National Stock Exchanges of India. The firm employs more than 19,000 people across Australia, Canada, France, Germany, Switzerland, Egypt, India, Italy, Netherlands, Peru, Philippines, Singapore, Thailand, the UK, and the USA. For more information, visit ************** You can also find us on: **************************************** *************************************** ******************************** eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. We are also committed to protecting and safeguarding your personal data. Please find our policy here.
    $150k-200k yearly Auto-Apply 40d ago
  • Tools Developer, Data Scientist

    Sustainable Talent

    Data engineer job in Austin, TX

    Sustainable Talent is partnering with Nvidia, a global leader that has been transforming computer graphics, PC gaming, and accelerated computing for over 25 years. We are looking for a Tools Developer, Data Scientist to support our client's Systems Product team. This is a W-2 full-time contract based onsite in Santa Clara, CA, Austin, TX, or Phoenix, AZ, with hybrid work options. We offer competitive pay of $65/hr - $75/hr based on factors like experience, education, and location, and provide full benefits, PTO, and an amazing company culture!
    We are seeking a skilled Tools Developer / Data Scientist / Data Analyst to develop innovative solutions that enable teams within the Systems Product Team (SPT) organization to successfully execute product bring-up activities. This role also involves creating visually compelling, user-friendly dashboards that support strategic executive decision-making. Your work will focus on a wide range of data sources, including engineering logs, factory shop floor metrics, inventory data, material inputs, and production planning. You will contribute to high-impact projects that are critical to new product introductions, as well as key supply chain and manufacturing decisions. (An illustrative reporting sketch follows this listing.)
    What You'll Be Doing: Build tools to be used by various engineering and program management teams. Work with raw log data (e.g., from Splunk, SAP, and other databases). Analyze engineering data, product bring-up execution data, and other Systems Products Team data: build plan, NPI supply/demand/allocation, factory shop floor and test data, and lab asset data. Collaborate with other tools team members to create and deploy solutions (e.g., automated workflows, insightful dashboards).
    Big Data Analytics & Data Visualization Specialists - Core Skills & Qualifications: Strong programming skills in data manipulation, analysis, and automation (Python or similar). Experience with data processing libraries (pandas, numpy, etc.). Experience with data visualization libraries (matplotlib, plotly, etc.). Experience in big data analytics and data visualization. Proficiency with tools such as Splunk, Tableau, or Power BI. Strong SQL skills and experience with data modeling. Background in data processing pipelines (Spark, Kafka, or similar a plus).
    Preferred Experience: Background in statistical analysis (regressions, hypothesis testing, probability distributions, etc.). Experience in data preparation (ETL processes, data transformation, etc.). Experience in cybersecurity-related analytics or operational data reporting. Building and automating dashboards for executive and technical audiences.
    Sustainable Talent is a M/F+, disabled, and veteran equal employment opportunity and affirmative action employer.
    $65-75 hourly Auto-Apply 60d+ ago
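The Nvidia tools role above revolves around turning raw test and shop-floor data into executive-friendly charts with pandas and matplotlib. A minimal sketch of that kind of reporting step follows; the file layout, column names, and "first-pass yield" metric are assumptions for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical sketch: aggregate shop-floor test results by day and chart
# the pass rate for an executive dashboard. File and column names are invented.
tests = pd.read_csv("factory_test_results.csv", parse_dates=["test_date"])

daily = (
    tests.assign(passed=tests["result"].eq("PASS"))
         .groupby(tests["test_date"].dt.date)["passed"]
         .mean()
         .mul(100)  # convert fraction to percent
)

ax = daily.plot(kind="line", marker="o", title="Daily first-pass yield (%)")
ax.set_xlabel("Date")
ax.set_ylabel("Pass rate (%)")
plt.tight_layout()
plt.savefig("first_pass_yield.png")
```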
  • Data Scientist

    inKind 3.6 company rating

    Data engineer job in Austin, TX

    Job Title: Data Scientist
    Role Summary: As a Data Scientist at inKind, you will be responsible for gathering and analyzing large datasets, extracting actionable insights, and using those insights to drive predictive forecasts as well as automated and ad hoc business actions and decision making. You will work closely with our data team and cross-functional stakeholders, including marketing, finance, and product, to deliver recommendations and models that will drive decision-making, automated actions, and innovation, and enhance platform performance. This role requires a strong technical background in SQL, Python, Snowflake (Cortex AI), data modeling, and ETL processes, along with a deep understanding of analytics best practices. Because of inKind's position as a fast-growing start-up, this role will also require working with urgency and full commitment, and delivering solutions that drive large impacts at the platform level. (An illustrative modeling sketch follows this listing.)
    Key Responsibilities
    Understanding - Develop a deep analytic understanding of inKind's product that will serve as the backbone for developing data-science-driven models to innovate, automate, and optimize user acquisition, engagement, and profitability. In addition, thoroughly understand the qualitative context underlying inKind's historical data and macro business goals and take those factors into account within the data science process. Continuously validate assumptions through experimentation and real-world performance of models.
    Underlying Data - Gather data from various sources, then clean, combine, transform, and prepare it for analysis, modeling, and forecasting. Design datasets for optimal feature engineering.
    Analysis - Analyze large datasets to uncover trends, patterns, and opportunities using statistical methods, predictive modeling, AI tools, and machine learning. Develop and refine models to improve forecasting and power both automated and ad hoc decision-making. Continually update and refine data models to drive better results.
    Results - Collaborate with cross-functional business teams including data, marketing, product, and finance to translate business needs into robust and actionable solutions. Develop and deploy automated, repeatable, and scalable data-driven actions and processes. Integrate outputs into bespoke tests, marketing campaigns, and product workflows.
    Communication - Present findings, work product, and recommendations to both technical and non-technical stakeholders.
    Systems - Work with cloud-based data warehouses (e.g., Snowflake, BigQuery, Redshift) and analytics tools (e.g., dbt, Sigma, Looker, Tableau), in addition to standing up a data science tool stack.
    Minimum Qualifications: 3-5+ years of experience in data science, analytics engineering, data engineering, or a related role. Strong Python, SQL, R, and Snowflake Cortex AI experience. Robust analytical abilities with the capacity to gather, arrange, and organize a substantial amount of information with accuracy and attention to detail. Understanding of data modeling principles (Kimball, Star Schema, etc.). Experience with supervised learning methods (Random Forest, Gradient Boosting, and regression models), including feature engineering and model evaluation. Familiarity with BI tools like Sigma, Looker, Power BI, Tableau, or Mode Analytics. Strong problem-solving skills. The ability to work cross-functionally and communicate effectively with technical and non-technical teams. Passion for data-driven decision-making and improving business outcomes. Must love restaurants and hospitality.
    Preferred Qualifications: Bachelor's degree in computer science, data science, mathematics, information sciences, statistics, or a related field, or equivalent work experience. Experience with feature engineering and supervised learning models. Experience building and maintaining ETL pipelines. Previous startup experience.
    Work Environment: The candidate in this position will be located in an office environment and be exposed to lighting, temperature, and noise level conditions typical of that environment. This position is in-person, with an expectation of in-person work at the physical office location in downtown Austin, Texas.
    Some of our Benefits: Generous PTO and company holiday policy + company-paid Short Term Disability. 100% employer-covered health and dental insurance for our direct employees (a set plan is covered, with higher-tier healthcare coverage available at employee's additional cost; dependent coverage is at employee's cost); vision plan available at employee's additional cost. Child care benefits and generous parental leave. Dog-friendly workspace in a secure building with great views of downtown Austin. Daily lunches and snacks.
    Salary: $120,000 - $140,000 DOE
    inKind is an Equal Opportunity Employer. We believe that diversity is vital to inKind's ability to provide our clients with the best recommendations and are committed to fostering a varied and inclusive work environment. Your race, color, ancestry, religion, gender, gender identity, national origin, sexual orientation, age, marital status, disability status, veteran status, or any other protected category have no bearing on our hiring decisions. Read our Privacy Policy.
    $120k-140k yearly Auto-Apply 60d+ ago
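The inKind posting above calls out supervised learning methods such as Random Forest, plus feature engineering and model evaluation. The snippet below is a generic, hypothetical sketch of that workflow with scikit-learn; the dataset, feature names, and churn target are invented.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical sketch: train a Random Forest to score user churn risk.
# The input file, features, and target column are invented for illustration.
df = pd.read_parquet("user_features.parquet")
features = ["visits_30d", "avg_check", "days_since_last_visit", "offers_redeemed"]
X, y = df[features], df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=300, max_depth=8, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```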
  • Data Scientist

    Victory 3.9 company rating

    Data engineer job in Austin, TX

    We are looking for a skilled Data Scientist who will help us analyze large amounts of raw information to find patterns and use them to optimize our performance. You will build data products to extract valuable business insights, analyze trends, and help us make better decisions. We expect you to be highly analytical with a knack for analysis, math, and statistics, and a passion for machine learning and research. Critical thinking and problem-solving skills are also required. (An illustrative ensembling sketch follows this listing.)
    Data Scientist responsibilities are: Research and detect valuable data sources and automate collection processes. Perform preprocessing of structured and unstructured data. Design, implement, and deliver maintainable and high-quality code using best practices (e.g., Git/GitHub, secrets, configurations, YAML/JSON). Review large amounts of information to discover trends and patterns. Create predictive models and machine learning algorithms. Modify and combine different models through ensemble modeling. Organize and present information using data visualization techniques. Develop and suggest solutions and strategies to business challenges. Work together with engineering and product development teams.
    Data Scientist requirements are: 3+ years of experience working in a Data Scientist or Data Analyst position. Significant experience in data mining, machine learning, and operations research. Experience with data modeling, design patterns, and building highly scalable and secure solutions preferred. Prior experience installing data architectures on cloud providers (e.g., AWS, GCP, Azure), using DevOps tools, and automating data pipelines. Good experience using business intelligence/visualization tools (such as Tableau), data frameworks (such as Hadoop, DataFrames, RDDs, Dataclasses), and data formats (CSV, JSON, Parquet, Avro, ORC). Advanced knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset. MA or PhD degree in Computer Science, Engineering, or another relevant area; a graduate degree in Data Science or another quantitative field is preferred. Must be a U.S. Citizen.
    $73k-101k yearly est. Auto-Apply 60d+ ago
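The Victory listing above mentions combining models through ensemble modeling. One common way to do that is soft voting across several scikit-learn estimators; the sketch below uses synthetic data purely for illustration and is not presented as the company's method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical sketch: combine several models through soft-voting ensembling.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across models
)

scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print("Mean AUC:", scores.mean())
```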
  • Collibra Data Governance Consultant with MuleSoft

    Tekskills 4.2 company rating

    Data engineer job in Austin, TX

    Contract duration (in months): 4
    Implementation Partner: Infosys
    Must Have Skills (Top 3 technical skills only): 1. Collibra Data Governance 2. Collibra Connect with MuleSoft
    Detailed Job Description: Collibra Data Governance with experience in Collibra Connect for MuleSoft.
    Desired years of experience: Above 5 years
    Education/Certifications (Required): BE
    Top 3 responsibilities you would expect the subcontractor to shoulder and execute: 1. Interact with business users to gather the requirements 2. Configure the data governance structure in Collibra 3. Review deliverables from offshore
    Contact: Nagarjuna G, Sr. Technical Recruiter, Phone: ************
    Additional Information: All your information will be kept confidential according to EEO guidelines.
    $78k-109k yearly est. 60d+ ago
  • Senior Data Insights Consultant

    VMLY&R

    Data engineer job in Austin, TX

    Do you have strong data and analytical skills? Can you identify, build, and apply data models to support marketing insights? Would you like an exciting job with plenty of opportunity to grow? Then you might be the Senior Data Insights Consultant we are looking for!
    What will your day look like? As our new Senior Data Insights Consultant, you will join our growing Business Insights team. Here, you will provide your data skills and business knowledge to support our clients in making data-driven decisions to improve their digital communication. This entails designing and specifying data solutions and data integrations for various campaign platforms. More specifically, your tasks will include:
    * Identifying and designing value-adding insights solutions, leveraging data to optimize communication strategies.
    * Interpreting business requests and clarifying data requirements.
    * Assisting and advising on data models and scoping of new projects.
    * Providing answers and insights to business-related questions via automated reporting solutions as well as ad-hoc data analyses.
    * Analysing client prospect and customer data, gleaning insights that inform experience, content, and performance optimizations.
    * Telling stories shaped and informed by data and your analysis.
    * Collaborating with highly skilled specialists including Account Managers, Architects, Developers, Creatives, Strategists, Data Scientists, and Marketing Operations experts to service our clients coherently.
    * Promoting a data-driven agenda in a digital marketing context.
    Who are you going to work with? You will join a team of hands-on Data Analysts, Data Scientists, Consultants, and Data Engineers who are passionate about bringing value and knowledge from data. We are all about unlocking insights from data through analytics and making that insight applicable in 1:1 data-driven communication and CRM. Your work will always be firmly anchored in data in a cross-disciplinary setting, collaborating closely with highly enthusiastic experts.
    What do you bring to the table? As a person, you are outgoing and love being part of interdisciplinary projects and solutions. You are eager to learn and quick to understand the complexity of high-tech dialogues and solutions. Furthermore, you have the drive, enthusiasm, and technical skills to take the lead when facing the client in data- and insights-related matters. Through this, you strive to help and inspire the client to grow their business by combining data insights, performance analytics, and data engineering. It's an advantage if you have agency experience and marketing domain knowledge, but it's not a requirement. In addition, you have:
    * A minimum of 3-5 years of experience in a senior consultant/business liaison role related to data, BI, analytics, or reporting solutions.
    * Hands-on experience working with SQL, databases, ETL and reporting, and data analysis through R, Python, or other similar toolsets.
    * Experience with report and dashboard development in Power BI/Tableau or similar tools.
    * Experience with database and data model design for business intelligence and analytics solutions is an advantage but not a requirement.
    * Experience with Google Insights/Google Analytics, Google Cloud Platform/BigQuery, Adobe Analytics, and Salesforce is an advantage.
    * Great communication skills in English.
    A leader in personalized customer experiences: VML MAP is a world-leading Centre of Excellence that helps businesses humanize the relationship between the brand and the customer through hyper-personalization at scale, marketing automation, and CRM. With the brain of a consultancy, the heart of an agency, and the power of technology and data, we work with some of the world's most admired brands to help them on their transformation journey to becoming truly customer-centric. Together, we are 1,000+ technology specialists, data scientists, strategic thinkers, consultants, operations experts, and creative minds from 55+ nationalities.
    A global network: We are part of the global VML network that encompasses more than 30,000 employees across 150+ offices in 60+ markets, each contributing to a culture that values connection, belonging, and the power of differences. #LI-EMEA
    WPP (VML MAP) is an equal opportunity employer and considers applicants for all positions without discrimination or regard to characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers. For more information, please visit our website, and follow VML MAP on our social channels via Instagram, LinkedIn and X. When you click "Apply now" below, your information is sent to VML MAP. To learn more about how we process your personal data when you apply for a role with us, how you can update your information, or how to have the information removed, please read our Privacy policy. California residents should read our California Recruitment Privacy Notice.
    $78k-108k yearly est. 4d ago
  • Data Scientist, GTM Analytics

    Airtable 4.2 company rating

    Data engineer job in Austin, TX

    Airtable is the no-code app platform that empowers people closest to the work to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done.
    Our data team's mission is to fuel Airtable's growth and operations. We are a strategic enabler, building high-quality and customer-centric data products and solutions. We are looking for a Data Scientist to work directly with Airtable's business stakeholders. Your data products will be instrumental in accelerating the efficiency of Customer Engagement (CE) organizations, including the sales, CSG, and revenue operations teams. This role offers the opportunity to significantly impact Airtable's strategy and go-to-market execution, providing you with a platform to deploy your data skills in a way that directly contributes to our company's growth and success. (An illustrative segmentation sketch follows this listing.)
    What you'll do
    Champion AI-driven data products with scalability: Design and implement ML models and AI solutions to equip the CE team with actionable insights and recommendations. Build scalable data pipelines and automated workflows with MLOps best practices.
    Support key business processes: Provide strategic insights, repeatable frameworks, and thought partnership independently to support key CE business processes like territory carving, annual planning, pricing optimization, and performance attribution.
    Strategic analysis: Drive in-depth deep-dive analysis to ensure accuracy and relevance. Influence business stakeholders with good storytelling around the data. Tackle ambiguous problems to uncover business value with minimal oversight.
    Develop executive dashboards: Design, build, and maintain high-quality dashboards and BI tools. Partner with the Revenue Operations team to efficiently enable the broad set of CE roles with these data products.
    Strong communication skills: Effectively communicate the "so-what" of an analysis, illustrating how insights can be leveraged to drive business impact across the organization.
    Who you are
    Education: Bachelor's degree in a quantitative discipline (Math, Statistics, Operations Research, Economics, Engineering, or CS); MS/MBA preferred.
    Industry experience: 4+ years of working experience as a data scientist / analytics engineer in high-growth B2B SaaS, preferably supporting sales, CSG, or other go-to-market stakeholders. Demonstrated business acumen with a deep understanding of enterprise sales strategies (sales pipeline, forecast models, sales capacity, sales segmentation, quota planning), CSG strategies (customer churn risk models, performance attribution), and enterprise financial metrics (ACV, ARR, NDR). Familiar with CRM platforms (i.e., Salesforce).
    Technical proficiency: 6+ years of experience working with SQL in modern data platforms, such as Databricks, Snowflake, Redshift, BigQuery. 6+ years of experience working with Python or R for analytics or data science projects. 6+ years of experience building business-facing dashboards and data models using modern BI tools like Looker, Tableau, etc. Proficient-level experience developing automated solutions to collect, transform, and clean data from various sources, using tools such as dbt and Fivetran. Proficient knowledge of data science models, such as regression, classification, clustering, time series analysis, and experiment design. Hands-on experience with batch LLM pipelines is preferred. Excellent communication skills to present findings to both technical and non-technical audiences. Passion to thrive in a dynamic environment - that means being flexible and willing to jump in and do whatever it takes to be successful.
    Airtable is an equal opportunity employer. We embrace diversity and strive to create a workplace where everyone has an equal opportunity to thrive. We welcome people of different backgrounds, experiences, abilities, and perspectives. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or any characteristic protected by applicable federal and state laws, regulations and ordinances. Learn more about your EEO rights as an applicant. VEVRAA-Federal Contractor. If you have a medical condition, disability, or religious belief/practice which inhibits your ability to participate in any part of the application or interview process, please complete our Accommodations Request Form and let us know how we may assist you. Airtable is committed to participating in the interactive process and providing reasonable accommodations to qualified applicants.
    Compensation awarded to successful candidates will vary based on their work location, relevant skills, and experience. Our total compensation package also includes the opportunity to receive benefits, restricted stock units, and may include incentive compensation. To learn more about our comprehensive benefit offerings, please check out Life at Airtable. For work locations in the San Francisco Bay Area, Seattle, New York City, and Los Angeles, the base salary range for this role is $179,500-$221,500 USD. For all other work locations (including remote), the base salary range for this role is $161,500-$199,300 USD. Please see our Privacy Notice for details regarding Airtable's collection and use of personal information relating to the application and recruitment process by clicking here.
    🔒 Stay Safe from Job Scams: All official Airtable communication will come from an @airtable.com email address. We will never ask you to share sensitive information or purchase equipment during the hiring process. If in doubt, contact us at ***************. Learn more about avoiding job scams here.
    $179.5k-221.5k yearly Auto-Apply 17d ago
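The Airtable role above supports go-to-market processes such as territory carving and segmentation. As an illustrative sketch only, the snippet below clusters accounts with k-means on a few invented firmographic and usage features; the input file and columns are placeholders.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical sketch: segment accounts by usage and firmographic signals,
# the kind of grouping that could feed territory carving or quota planning.
accounts = pd.read_parquet("account_features.parquet")
features = ["arr", "seat_count", "weekly_active_users", "industry_score"]

# Standardize features so no single scale dominates the clustering.
X = StandardScaler().fit_transform(accounts[features])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=7)
accounts["segment"] = kmeans.fit_predict(X)

# Summarize each segment for a quick sanity check.
print(accounts.groupby("segment")[features].mean().round(1))
```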
  • Big Data Consultant (Full-Time Position)

    Sonsoft 3.7 company rating

    Data engineer job in Austin, TX

    SonSoft Inc. is a USA-based corporation duly organized under the laws of the Commonwealth of Georgia. SonSoft Inc. is growing at a steady pace, specializing in the fields of Software Development, Software Consultancy, and Information Technology Enabled Services.
    Job Description
    Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture. Must have strong programming knowledge of Core Java or Scala - objects and classes, data types, arrays and string operations, operators, control flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files. Must have hands-on experience in the design, implementation, and build of applications or solutions using Core Java/Scala. Strong understanding of Hadoop fundamentals. Must have experience working on big data processing frameworks and tools - MapReduce, YARN, Hive, Pig. Strong understanding of RDBMS concepts and good knowledge of writing SQL and interacting with RDBMS and NoSQL databases (HBase) programmatically. Strong understanding of file formats - Parquet and other Hadoop file formats. Proficient with application build and continuous integration tools - Maven, SBT, Jenkins, SVN, Git. Experience working with Agile and the Rally tool is a plus. Strong understanding and hands-on programming/scripting experience - UNIX shell, Python, Perl, and JavaScript. Should have worked on large data sets and have experience with performance tuning and troubleshooting. (An illustrative Spark sketch follows this listing.)
    Preferred: Knowledge of Java Beans, Annotations, Logging (log4j), and Generics is a plus. Knowledge of design patterns (Java and/or GoF) is a plus. Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus. Experience in the financial domain is preferred. Experience and desire to work in a global delivery environment.
    Qualifications
    Bachelor's degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education. At least 5 years of design and development experience in big data, Java, or data warehousing related technologies. At least 3 years of hands-on design and development experience with big data related technologies - Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting. Should be a strong communicator and able to work independently with minimum involvement from client SMEs. Should be able to work in a team in a diverse, multi-stakeholder environment.
    Additional Information
    Connect with me at ******************************************* (For Direct Clients Requirements)
    ** U.S. Citizens and those who are authorized to work independently in the United States are encouraged to apply. We are unable to sponsor at this time.
    Note: This is a Full-Time & Permanent job opportunity for you. Only US Citizen, Green Card Holder, GC-EAD, H4-EAD & L2-EAD can apply. No OPT-EAD, H1B & TN candidates, please. Please mention your Visa Status in your email or resume.
    ** All your information will be kept confidential according to EEO guidelines.
    $76k-105k yearly est. 60d+ ago
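The Sonsoft listing above emphasizes Core Java/Scala with Hadoop-ecosystem tools and notes Spark as a plus. Purely as an illustration of the same batch-processing pattern, the sketch below uses PySpark to read Parquet, aggregate, and write results; paths and column names are invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical PySpark sketch (the posting emphasizes Core Java/Scala, but the
# same flow applies): read Parquet, aggregate, and write curated output.
spark = SparkSession.builder.appName("txn-summary").enableHiveSupport().getOrCreate()

txns = spark.read.parquet("hdfs:///data/raw/transactions")

summary = (
    txns.filter(F.col("status") == "SETTLED")
        .groupBy("account_id")
        .agg(
            F.count("*").alias("txn_count"),
            F.sum("amount").alias("total_amount"),
        )
)

summary.write.mode("overwrite").parquet("hdfs:///data/curated/txn_summary")
spark.stop()
```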
  • AWS Data Migration Consultant

    Slalom 4.6 company rating

    Data engineer job in Austin, TX

    Candidates can live within commutable distance to any Slalom office in the US. We have a hybrid and flexible environment. Who You'll Work With As a modern technology company, we've never met a technical challenge we didn't like. We enable our clients to learn from their data, create incredible digital experiences, and make the most of new technologies. We blend design, engineering, and analytics expertise to build the future. We surround our technologists with interesting challenges, innovative minds, and emerging technologies. We are seeking an experienced Cloud Data Migration Architect with deep expertise in SQL Server, Oracle, DB2, or a combination of these platforms, to lead the design, migration, and optimization of scalable database solutions in the AWS cloud. This role will focus on modernizing on-premises database systems by architecting high-performance, secure, and reliable AWS-hosted solutions. As a key technical leader, you will work closely with data engineers, cloud architects, and business stakeholders to define data strategies, lead complex database migrations, build out ETL pipelines, and optimize performance across legacy and cloud-native environments. What You'll Do * Design and optimize database solutions on AWS, including Amazon RDS, EC2-hosted instances, and advanced configurations like SQL Server Always On or Oracle RAC (Real Application Clusters). * Lead and execute cloud database migrations using AWS Database Migration Service (DMS), Schema Conversion Tool (SCT), and custom automation tools. * Architect high-performance database schemas, indexing strategies, partitioning models, and query optimization techniques. * Optimize complex SQL queries, stored procedures, functions, and views to ensure performance and scalability in the cloud. * Implement high-availability and disaster recovery (HA/DR) strategies including Always-On, Failover Clusters, Log Shipping, and Replication, tailored to each RDBMS. * Ensure security best practices are followed including IAM-based access control, encryption, and compliance with industry standards. * Collaborate with DevOps teams to implement Infrastructure-as-Code (IaC) using tools like Terraform, CloudFormation, or AWS CDK. * Monitor performance using tools such as AWS CloudWatch, Performance Insights, Query Store, Dynamic Management Views (DMVs), or Oracle-native tools. * Work with software engineers and data teams to integrate cloud databases into enterprise applications and analytics platforms. What You'll Bring * 5+ years of experience in database architecture, design, and administration with at least one of the following: SQL Server, Oracle, or DB2. * Expertise in one or more of the following RDBMS platforms: Microsoft SQL Server, Oracle, DB2. * Hands-on experience with AWS database services (RDS, EC2-hosted databases). * Strong understanding of HA/DR solutions and cloud database design patterns. * Experience with ETL development and data integration, using tools such as SSIS, AWS Glue, or custom solutions. * Familiarity with AWS networking components (VPCs, security groups) and hybrid cloud connectivity. * Strong troubleshooting and analytical skills to resolve complex database and performance issues. * Ability to work independently and lead database modernization initiatives in collaboration with engineering and client stakeholders. Nice to Have * AWS certifications such as AWS Certified Database - Specialty or AWS Certified Solutions Architect - Professional. 
* Experience with NoSQL databases or hybrid data architectures. * Knowledge of analytics and big data tools (e.g., Snowflake, Redshift, Athena, Power BI, Tableau). * Familiarity with containerization (Docker, Kubernetes) and serverless technologies (AWS Lambda, Fargate). * Experience with DB2 on-premise or cloud-hosted environments. About Us Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all. Compensation and Benefits Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance. Slalom is committed to fair and equitable compensation practices. For this position, the target base salary pay range in the following locations: Boston, Houston, Los Angeles, Orange County, Seattle, San Diego, Washington DC, New York, New Jersey, for Consultant level is $105,000-147,000 and for Senior Consultant level it is $120,000-$169,000 and for Principal level it is $133,000-$187,000. In all other markets, the target base salary pay range for Consultant level is $96,000-$135,000 and for Senior Consultant level it is $110,000-$155,000 and for Principal level it is $122,000-$172,000. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time. EEO and Accommodations Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process. We will accept applications until 1/31/2026 or until the positions are filled.
    $133k-187k yearly 3d ago
  • Hardware & Gaming Platform Engineer - Validation & Debug

    Advanced Micro Devices, Inc. 4.9 company rating

    Data engineer job in Austin, TX

    WHAT YOU DO AT AMD CHANGES EVERYTHING
    At AMD, our mission is to build great products that accelerate next-generation computing experiences - from AI and data centers to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges - striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
    HARDWARE & GAMING PLATFORM ENGINEER - Silicon Validation & System Debug
    THE ROLE: Join AMD's Gaming and Hardware Platform Engineering team to lead system and silicon validation and platform bring-up for next-generation gaming solutions. This role is hands-on, working in the lab with prototype boards, CPUs, GPUs, and memory subsystems to ensure high-performance, reliable gaming experiences. You will collaborate across silicon, firmware, and platform teams to debug complex issues and deliver robust solutions for gamers and developers worldwide.
    KEY RESPONSIBILITIES:
    System & Platform Leadership
    * Define and own system specifications for gaming platforms, including CPUs, GPUs, memory, I/O adapters, and power management.
    * Lead cross-functional teams through all phases of hardware development - from design and bring-up to validation and production.
    * Coordinate with silicon, firmware, and platform teams to ensure feature alignment and timely delivery.
    Silicon Debug & Validation
    * Own post-silicon validation and debug for gaming and workstation boards.
    * Perform board-level analysis: power sequencing, thermal testing, and signal integrity.
    * Use advanced lab tools (oscilloscopes, logic analyzers, protocol analyzers) for root-cause analysis.
    * Develop and execute comprehensive test plans for BIOS, drivers, silicon features, and OS certification.
    * Drive defect prioritization and resolution plans during SOC bring-up and production phases.
    Technical Innovation
    * Develop scripts and tools to automate hardware validation and improve test efficiency (see the sketch after this listing).
    * Engage in pre-silicon emulation, simulation, and product engineering to ensure readiness for manufacturing and launch.
    Customer & Partner Engagement
    * Collaborate with OEM/ODM partners on hardware integration and co-validation.
    * Review platform designs against gaming workload requirements and optimize configurations.
    Release & Delivery
    * Sign off on synchronized hardware releases for internal and external delivery.
    * Oversee test execution and debug leadership across multiple platforms and teams.
    REQUIRED TECH SKILLS:
    * Strong experience in silicon bring-up and post-silicon debug.
    * Hands-on proficiency with lab equipment for electrical and thermal testing.
    * Deep knowledge of system architecture: x86 CPUs, GPUs, DDR5 memory, PCIe Gen5, and power management.
    * Familiarity with firmware flashing, BIOS configuration, and hardware validation methodologies.
    * Programming/scripting skills (Python, C/C++, Perl) for automation.
    * Understanding of pre-silicon emulation, tapeout processes, and manufacturing test.
    PREFERRED EXPERIENCE:
    * Background in gaming platforms, high-performance computing, or workstation hardware.
    * Expertise in SOC-level validation, IP debug, and performance optimization.
    * Familiarity with Microsoft Windows and Linux operating systems, virtualization (VMware, Xen), and certification processes.
    * Excellent communication and leadership skills for cross-functional collaboration.
    EDUCATION:
    * Bachelor's or Master's degree in Electrical Engineering, Computer Engineering, Computer Science, or related discipline.
    * 10+ years of industry experience, with at least 5 years focused on system or platform engineering and integration for gaming or high-performance hardware.
    LOCATION: Austin, TX
    #LI-LM1 #LI-HYBRID
    Benefits offered are described: AMD benefits at a glance.
    AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here. This posting is for an existing vacancy.
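As a concrete illustration of the scripting-for-validation work this listing describes, here is a minimal lab-automation sketch in Python. It is only a sketch under stated assumptions: it uses the open-source PyVISA library, and the VISA address and the DC-voltage SCPI command are hypothetical placeholders that vary by instrument vendor, not anything AMD specifies.

```python
# Minimal sketch of bench-automation scripting for platform validation.
# Assumes PyVISA (plus a VISA backend) is installed and a SCPI-capable
# instrument is reachable at the given address; the address and the
# measurement command are hypothetical and vendor-specific.
import csv
import time

import pyvisa


def log_rail_voltage(resource: str, samples: int, out_path: str) -> None:
    """Query an instrument repeatedly and log readings, e.g. for power-sequencing checks."""
    rm = pyvisa.ResourceManager()
    inst = rm.open_resource(resource)
    print("Connected to:", inst.query("*IDN?").strip())  # *IDN? is standard SCPI

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_s", "reading_v"])
        for _ in range(samples):
            # "MEAS:VOLT:DC?" is a common SCPI form, but the exact command
            # depends on the instrument -- treat it as a placeholder.
            reading = float(inst.query("MEAS:VOLT:DC?"))
            writer.writerow([time.time(), reading])
            time.sleep(0.1)

    inst.close()


if __name__ == "__main__":
    log_rail_voltage("TCPIP0::192.168.0.10::INSTR", samples=100, out_path="rail_log.csv")
```

In practice, sweeps like this are usually wrapped in a test harness so readings feed directly into pass/fail reporting rather than a standalone CSV.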
    $86k-118k yearly est. 54d ago
  • Software Data Engineer

    Omni Federal 4.5 company rating

    Data engineer job in Austin, TX

    Job Description
    Job Title: Software Data Engineer
    Security Clearance: Active DoD Secret Clearance
    We question. We listen. We adapt. Be honest. Be pragmatic.
    Omni Federal, a Washington, DC-based software solutions provider founded in 2017, specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, Omni focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end-users, with innovation driven by its Omni Labs and SBIR Innovation centers. Omni has a presence in Boston, MA, Colorado Springs, CO, San Antonio, TX, and St. Louis, MO.
    Why Omni?
    Environment of Autonomy
    Innovative Commercial Approach
    People over process
    We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory (ASWF), a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto “By Soldiers, For Soldiers,” ASWF equips service members to develop mission-critical software solutions independently, which is especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.
    Required Skills:
    Active DoD Secret Clearance (Required)
    4+ years of experience in data science, data engineering, or similar roles.
    Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow (see the sketch after this listing).
    Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
    Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow).
    Solid understanding of data warehousing concepts (e.g., Kimball, Inmon methodologies) and experience with data modeling for analytical purposes.
    Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation.
    CompTIA Security+ Certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant.
    Nice to Have:
    Familiarity with SBIR technologies and transformative platform shifts
    Experience working in Agile or DevSecOps environments
    2+ years of experience interfacing with Platform Engineers and the data visibility team, managing AWS resources, and administering GitLab
    About Omni Federal
    Omni Federal is a small business Defense Contractor focused on modern application development & deployment, cloud enablement, data analytics and DevSecOps services for the Federal government. Our past performance is a mix of commercial and federal business that allows us to leverage the latest commercial technologies and processes and adapt them to the Federal government. Omni Federal designs, builds and operates data-rich applications leveraging advanced data modeling, machine learning and data visualization techniques to empower our customers to make better data-driven decisions. We are at the forefront of Modernization and Automation, and our services give our Customers, and ultimately the end-users, the options they need to get where they want to be.
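To ground the ETL/ELT skills this listing asks for, here is a minimal Airflow DAG sketch in Python: extract rows from Postgres and land them in S3 as CSV. The connection IDs, table, and bucket are hypothetical placeholders, and a production pipeline would typically lean on provider operators, incremental loading, and data-quality checks rather than a single Python task.

```python
# Minimal Airflow DAG sketch: extract rows from Postgres, land them in S3 as CSV.
# Connection IDs, table name, and bucket are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


def extract_and_load(ds, **_):
    # Pull today's rows from the source database (connection ID is assumed).
    pg = PostgresHook(postgres_conn_id="source_db")
    df = pg.get_pandas_df(
        "SELECT * FROM readiness_events WHERE event_date = %(ds)s", parameters={"ds": ds}
    )
    # Write the extract to the raw zone of a data lake bucket (assumed name).
    s3 = S3Hook(aws_conn_id="aws_default")
    s3.load_string(
        df.to_csv(index=False),
        key=f"raw/readiness_events/{ds}.csv",
        bucket_name="example-data-lake",
        replace=True,
    )


with DAG(
    dag_id="daily_readiness_extract",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```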
    $83k-112k yearly est. 30d ago
  • Senior Data Engineer

    Icon Mechanical 4.8 company rating

    Data engineer job in Austin, TX

    ICON is seeking a Senior Data Engineer to join our Data Intelligence & Systems Architecture (DISA) team. This engineer will play a foundational role in shaping ICON's enterprise data platform within Palantir Foundry, owning the ingestion, modeling, and activation of data that powers reporting, decision-making, and intelligent automation across the company. You will work closely with teams across Supply Chain & Manufacturing, Finance & Accounting, Human Resources, Software, Field Operations and R&D to centralize high-value data sources, model them into scalable assets, and enable business-critical use cases, ranging from real-time reporting to operations-focused AI/ML solutions. This is a highly cross-functional and technical role, ideal for someone with strong data engineering skills, deep business curiosity, and a bias toward action. This role is based at ICON's headquarters in Austin, TX and reports to the Senior Director of Operations.
    RESPONSIBILITIES:
    Lead data ingestion and transformation pipelines within Palantir Foundry, integrating data from internal tools, SaaS platforms, and industrial systems (see the sketch after this listing)
    Model and maintain high-quality, governed data assets to support use cases in reporting, diagnostics, forecasting, and automation
    Build analytics frameworks and operational dashboards that give teams real-time visibility into project progress, cost, equipment status, and material flow
    Partner with business stakeholders and technical teams to translate pain points and questions into scalable data solutions
    Drive the development of advanced analytics capabilities, including predictive maintenance, proactive purchasing workflows, and operations intelligence
    Establish best practices for pipeline reliability, versioning, documentation, and testing within Foundry and across the data platform
    Mentor team members and contribute to a growing culture of excellence in data and systems engineering
    REQUIRED SKILLS AND EXPERIENCE:
    8+ years of experience in data engineering, analytics engineering, or backend software development
    Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field
    Strong hands-on experience with Palantir Foundry, including Workshop, Code Repositories, Ontologies, and Object Models
    Proficiency in Python and SQL for pipeline development and data modeling
    Experience integrating data from APIs, machine data sources, ERP systems, SaaS tools, and cloud storage platforms
    Strong understanding of data modeling principles, business logic abstraction, and stakeholder collaboration
    Proven ability to independently design, deploy, and scale data products in fast-paced environments
    PREFERRED SKILLS AND EXPERIENCE:
    Experience supporting Manufacturing, Field Operations, or Supply Chain teams with near real-time analytics
    Familiarity with platforms such as Procore, Coupa, NetSuite, or similar
    Experience building predictive models or workflow automation in or on top of enterprise platforms
    Background in data governance, observability, and maintaining production-grade pipelines
    ICON is an equal opportunity employer committed to fostering an innovative, inclusive, diverse and discrimination-free work environment. Employment with ICON is based on merit, competence, and qualifications.
It is our policy to administer all personnel actions, including recruiting, hiring, training, and promoting employees, without regard to race, color, religion, gender, sexual orientation, gender identity, national origin or ancestry, age, disability, marital status, veteran status, or any other legally protected classification in accordance with applicable federal and state laws. Consistent with the obligations of these laws, ICON will make reasonable accommodations for qualified individuals with disabilities. Furthermore, as a federal government contractor, the Company maintains an affirmative action program which furthers its commitment and complies with recordkeeping and reporting requirements under certain federal civil rights laws and regulations, including Executive Order 11246, Section 503 of the Rehabilitation Act of 1973 (as amended) and the Vietnam Era Veterans' Readjustment Assistance Act of 1974 (as amended). Headhunters and recruitment agencies may not submit candidates through this application. ICON does not accept unsolicited headhunter and agency submissions for candidates and will not pay fees to any third-party agency without a prior agreement with ICON. As part of our compliance with these obligations, the Company invites you to voluntarily self-identify as set forth below. Provision of such information is entirely voluntary and a decision to provide or not provide such information will not have any effect on your employment or subject you to any adverse treatment. Any and all information provided will be considered confidential, will be kept separate from your application and/or personnel file, and will only be used in accordance with applicable laws, orders and regulations, including those that require the information to be summarized and reported to the federal government for civil rights enforcement purposes. Internet Applicant Employment Notices
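This listing centers on Palantir Foundry pipeline work. Below is a minimal sketch of a Foundry-style Python transform using the publicly documented transforms.api decorator; the dataset paths, column names, and cleaning rules are hypothetical placeholders, not ICON's actual ontology or pipelines.

```python
# Minimal sketch of a Foundry Code Repositories transform in PySpark.
# Dataset paths and columns are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Example/pipelines/clean/material_receipts"),        # hypothetical output dataset
    raw=Input("/Example/pipelines/raw/erp_material_receipts"),   # hypothetical input dataset
)
def clean_material_receipts(raw):
    """Normalize a raw ERP extract into a governed, analysis-ready dataset."""
    return (
        raw
        .filter(F.col("receipt_qty") > 0)                         # drop reversed/zero lines
        .withColumn("received_at", F.to_timestamp("received_at"))
        .withColumn("unit_cost_usd", F.col("extended_cost") / F.col("receipt_qty"))
        .dropDuplicates(["receipt_id"])
    )
```

Datasets produced this way typically back Ontology objects and Workshop dashboards downstream, which is where the reporting and operations use cases in the listing would plug in.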
    $82k-114k yearly est. 32d ago
  • Data Science Engineer

    Intelliswift 4.0 company rating

    Data engineer job in Austin, TX

    Data Science Engineer
    Job ID: 25-12558
    Job Title: Data Science Engineer
    Duration: 8 Months
    Work type: W2
    Payrate: $69.71/HR
    Responsibilities:
    * Develop and implement end-to-end ML models for fraud detection and risk mitigation, emphasizing high-precision classification (see the sketch after this listing).
    * Work closely with Cyber Fraud Investigations, Data Engineering and AI/ML teams to establish data science frameworks and embed insights into operational processes.
    * Create and automate dashboards, feature pipelines, and model scoring outputs in collaboration with various stakeholders.
    Skills:
    * Possess over 5 years of experience in data science and machine learning.
    * Demonstrate expertise in ML techniques, SQL, Python/R, and big data frameworks.
    * Hands-on experience with AI agents and LLMs - Preferred.
    * Showcase a successful history of leading impactful data science initiatives.
    * Exhibit strong communication and teamwork abilities.
    * This role is not suitable for entry-level candidates or interns.
    Education:
    * Hold a Bachelor's degree in Statistics, Mathematics, Computer Science, or a related field. Advanced degrees are a plus.
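To illustrate the "high-precision classification" emphasis in this listing, here is a minimal scikit-learn sketch that tunes a decision threshold for precision instead of using the default 0.5 cutoff. The data is synthetic, and the 0.95 precision target is an illustrative assumption, not a requirement from the posting.

```python
# Minimal sketch: train a fraud-style classifier and pick a score threshold
# that meets a precision target, instead of the default 0.5 cutoff.
# Synthetic data; the 0.95 precision target is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# ~2% positive class, mimicking rare fraud events.
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

precision, recall, thresholds = precision_recall_curve(y_test, scores)
target = 0.95
# thresholds has one fewer entry than precision; take the lowest threshold meeting the target.
ok = np.where(precision[:-1] >= target)[0]
if ok.size:
    i = ok[0]
    print(f"threshold={thresholds[i]:.3f} precision={precision[i]:.3f} recall={recall[i]:.3f}")
else:
    print("No threshold reaches the precision target on this split.")
```

Tuning for precision like this keeps false positives (and therefore analyst review load) bounded, which is usually the operational constraint in fraud investigation workflows.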
    $69.7 hourly 26d ago
  • Delivery Consultant - Data Architect, AWS Professional Services

    Amazon 4.7 company rating

    Data engineer job in Austin, TX

    The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. You will lead customer-focused project teams as a technical leader and perform hands-on development of Data & Analytics solutions with exceptional quality. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. Key job responsibilities As an experienced technology professional, you will be responsible for: - Leading project teams, designing and implementing end-to-end large-scale, complex, scalable, and secure Data Analytics AWS solutions tailored to customer needs - Providing technical guidance and troubleshooting support throughout project delivery - Collaborating with stakeholders to gather requirements and propose effective Data & Analytics migration and modernization strategies - Acting as a trusted advisor to customers on industry trends and emerging technologies, ensuring compliance with industry standards and governance while aligning data solutions with business strategies. - Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About the team Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. 
Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never require sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Basic Qualifications
- 7+ years of technical specialist, design and architecture experience
- 5+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience
- 7+ years of consulting, design and implementation of serverless distributed solutions experience
- 5+ years of software development with object-oriented language experience
- 3+ years of cloud-based solution (AWS or equivalent), system, network and operating system experience
- 7+ years of external or internal customer-facing, complex and large-scale project management experience
- 5+ years of cloud architecture and solution implementation experience
- Bachelor's degree, or 7+ years of professional or military experience
Preferred Qualifications
- Degree in advanced technology, or AWS Professional-level certification
- Knowledge of AWS services including compute, storage, networking, security, databases, machine learning, and serverless technologies (see the sketch after this listing)
- Knowledge of security and compliance standards including HIPAA and GDPR
- Experience in performance optimization and cost management for cloud environments
- Experience communicating technical concepts to diverse audiences in pre-sales environments
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $138,200/year in our lowest geographic market up to $239,000/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************* . This position will remain posted until filled. Applicants should apply via our internal or external career site.
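As a small illustration of the serverless analytics stack named in the qualifications above (S3, Glue, Athena), here is a hedged boto3 sketch that submits an Athena query and polls for completion. The database, table, and results bucket are hypothetical placeholders, not any customer's actual environment.

```python
# Minimal sketch: run an Athena query with boto3 and wait for completion.
# Database, table, and output bucket are hypothetical placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "example_analytics"},            # assumed Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Query finished with state:", state)
if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print("Returned rows (including header):", len(rows))
```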
    $138.2k-239k yearly 3d ago
  • Backend Engineer

    Harnham

    Data engineer job in Austin, TX

    About the Role
    We are looking for a Backend Engineer to design, build, and operate a metrics platform that supports statistical evaluation at scale. You will own core components of the system, including data models, storage layouts, compute pipelines, and developer-facing frameworks for writing and testing metrics. This role involves close collaboration with metric authors and metric consumers across engineering, analytics, and QA, ensuring that metric results are reliable, performant, and easy to use end to end.
    What You'll Do
    Own and evolve the metrics platform, including schemas, storage layouts optimized for high-volume writes and fast analytical reads, and clear versioning strategies
    Build and maintain a framework for writing and running metrics, including interfaces, examples, local execution, and CI compatibility checks
    Design and implement testing systems for metrics and pipelines, including unit, contract, and regression tests using synthetic and sampled data
    Operate compute and storage systems in production, with responsibility for monitoring, debugging, stability, and cost awareness
    Partner with metric authors and stakeholders across development, analytics, and QA to plan changes and roll them out safely
    What You'll Need
    Strong experience using Python in production, including asynchronous programming (e.g., asyncio, aiohttp, FastAPI) - see the sketch after this listing
    Advanced SQL skills, including complex joins, window functions, CTEs, and query optimization through execution plan analysis
    Solid understanding of data structures and algorithms, with the ability to make informed performance trade-offs
    Experience with databases, especially PostgreSQL (required); experience with ClickHouse is a strong plus
    Understanding of OLTP vs. OLAP trade-offs and how schema and storage decisions affect performance
    Experience with workflow orchestration tools such as Airflow (used today), Prefect, Argo, or Dagster
    Familiarity with data libraries and validation frameworks (NumPy, pandas, Pydantic, or equivalents)
    Experience building web services (FastAPI, Flask, Django, or similar)
    Comfort working with containers and orchestration tools like Docker and Kubernetes
    Experience working with large-scale datasets and data-intensive systems
    Nice to Have
    Ability to read and make small changes in C++ code
    Experience building ML-adjacent metrics or evaluation infrastructure
    Familiarity with Parquet and object storage layout/partitioning strategies
    Experience with Kafka or task queues
    Exposure to basic observability practices (logging, metrics, tracing)
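To ground the stack this listing describes (async Python services over PostgreSQL, with window-function-heavy SQL), here is a minimal FastAPI/asyncpg sketch exposing one metric endpoint. The table, columns, and rolling-window query are hypothetical placeholders chosen for illustration, not the platform's actual schema.

```python
# Minimal sketch: async metric endpoint over PostgreSQL using FastAPI + asyncpg.
# DSN, table, and columns are hypothetical placeholders.
import os

import asyncpg
from fastapi import FastAPI

app = FastAPI()

# CTE + window function: rolling pass rate over the last 100 runs per metric.
ROLLING_PASS_RATE_SQL = """
WITH runs AS (
    SELECT
        metric_name,
        run_ts,
        AVG(passed::int) OVER (
            PARTITION BY metric_name
            ORDER BY run_ts
            ROWS BETWEEN 99 PRECEDING AND CURRENT ROW
        ) AS rolling_pass_rate
    FROM metric_runs
)
SELECT metric_name, run_ts, rolling_pass_rate
FROM runs
WHERE metric_name = $1
ORDER BY run_ts DESC
LIMIT 50
"""


@app.on_event("startup")
async def startup() -> None:
    # One connection pool per process; DSN comes from the environment.
    app.state.pool = await asyncpg.create_pool(dsn=os.environ["DATABASE_URL"])


@app.on_event("shutdown")
async def shutdown() -> None:
    await app.state.pool.close()


@app.get("/metrics/{metric_name}/rolling-pass-rate")
async def rolling_pass_rate(metric_name: str):
    async with app.state.pool.acquire() as conn:
        rows = await conn.fetch(ROLLING_PASS_RATE_SQL, metric_name)
    return [dict(r) for r in rows]
```

Keeping the heavy aggregation in SQL (rather than pulling raw rows into Python) is the usual trade-off for a read-heavy metrics API like the one described here.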
    $71k-98k yearly est. 2d ago

Learn more about data engineer jobs

How much does a data engineer earn in Pflugerville, TX?

The average data engineer in Pflugerville, TX earns between $67,000 and $123,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Pflugerville, TX

$91,000