Sr. Software Development Engineer, Annapurna Labs
Data engineer job in Austin, TX
In this role you will lead a technical team that is critical to providing compute sanitization for the Neuron ML accelerator fleet. You will work closely with the hardware and software teams to ensure the right tools are available for identifying defects or faulty hardware states before customers encounter issues. The Neuron Compute Sanitizer Tools team develops and maintains a pre-check and functional-correctness checking suite and provides fleet-level visibility into hardware/software sanitization trends.
Key job responsibilities
* Provide technical leadership to the Compute Sanitization team
* Work closely with the hardware and firmware design teams.
* Collect requirements from various other teams including training, inference and runtime.
* Collaborate with the runtime team to ensure timely release of the pre-check tools.
* Anticipate future needs based on the product roadmap and develop necessary tools to sanitize compute.
About the team
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise, so you feel empowered to take on more complex tasks in the future.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage you to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
About Amazon Annapurna Labs:
The Amazon Annapurna Labs team (our organization within AWS UC) is responsible for building innovation in silicon and software for our AWS customers. We are at the forefront of innovation by combining cloud scale with the world's most talented engineers. Our team covers multiple disciplines including silicon engineering, hardware design, software, and operations. Because of our team's breadth of talent, we have been able to improve AWS cloud infrastructure in high-performance machine learning with AWS Neuron, Inferentia, and Trainium ML chips; in networking and security with products such as AWS Nitro, the Elastic Network Adapter (ENA), and the Elastic Fabric Adapter (EFA); and in computing with AWS Graviton and F1 EC2 instances.
About AWS Utility Computing (UC):
AWS Utility Computing (UC) provides product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Additionally, this role may involve exposure to and experience with Amazon's growing suite of generative AI services and other cloud computing offerings across the AWS portfolio.
About AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
About AWS Neuron:
AWS Neuron is the software development kit (SDK) for Trainium and Inferentia, the AWS machine learning chips. Inferentia delivers best-in-class ML inference performance at the lowest cost in the cloud to our AWS customers. Trainium is designed to deliver best-in-class ML training performance at the lowest training cost in the cloud, and it is all enabled by AWS Neuron. Neuron includes an ML compiler and native integrations into popular ML frameworks. Our products are used at scale by external customers such as Anthropic and Databricks, as well as internal customers such as Alexa, Amazon Bedrock, Amazon Robotics, Amazon Ads, Amazon Rekognition, and many more.
BASIC QUALIFICATIONS
- 10+ years of engineering experience
- 10+ years of planning, designing, developing and delivering consumer software experience
- Experience partnering with product or program management teams
- Experience as a tech lead of a large group of engineers
PREFERRED QUALIFICATIONS
- Experience designing and developing large scale, high-traffic applications
- Experience with ML hardware/software
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Mobile Engineering
Data engineer job in Austin, TX
JLL empowers you to shape a brighter way.
Our people at JLL and JLL Technologies are shaping the future of real estate for a better world by combining world class services, advisory and technology for our clients. We are committed to hiring the best, most talented people and empowering them to thrive, grow meaningful careers and to find a place where they belong. Whether you've got deep experience in commercial real estate, skilled trades or technology, or you're looking to apply your relevant experience to a new industry, join our team as we help shape a brighter way forward.
Mobile Engineering - JLL
What this job involves: This position focuses on the hands-on performance of ongoing preventive maintenance and repair work orders across multiple facility locations. You will maintain, operate, and repair building systems including HVAC, electrical, plumbing, and other critical infrastructure components. This mobile role requires you to travel between assigned buildings, conduct facility inspections, respond to emergencies, and ensure all systems operate efficiently to support client occupancy and satisfaction across JLL's building portfolio.
What your day-to-day will look like:
• Perform ongoing preventive maintenance and repair work orders on facility mechanical, electrical and other installed systems, equipment, and components.
• Maintain, operate, and repair all HVAC systems and associated equipment, electrical distribution equipment, plumbing systems, building interior/exterior repair, and related grounds.
• Conduct assigned facility inspections and due diligence efforts, reporting conditions that impact client occupancy and operations.
• Respond effectively to all emergencies and after-hours building activities as required.
• Prepare and submit summary reports to management listing conditions found during assigned work and recommend corrective actions.
• Study and maintain familiarity with building automation systems, fire/life safety systems, and other building-related equipment.
• Maintain compliance with all safety procedures, recognize hazards, and propose elimination methods while adhering to State, County, or City Ordinances, Codes, and Laws.
Required Qualifications:
• Valid state driver's license and Universal CFC Certification.
• Minimum four years of technical experience in all aspects of building engineering with strong background in packaged and split HVAC units, plumbing, and electrical systems.
• Physical ability to lift up to 80 lbs and climb ladders up to 30 ft.
• Ability to read schematics and technical drawings.
• Availability for on-call duties and overtime as required.
• Must pass background, drug/alcohol, and MVR screening process.
Preferred Qualifications:
• Experience with building automation systems and fire/life safety systems.
• Knowledge of CMMS systems such as Corrigo for work order management.
• Strong troubleshooting and problem-solving abilities across multiple building systems.
• Experience working in commercial building environments.
• Commitment to ongoing safety training and professional development.
Location: Mobile position covering Austin, TX and surrounding area.
Work Shift: Standard business hours with on-call availability
#HVACjobs
This position does not provide visa sponsorship. Candidates must be authorized to work in the United States without employer sponsorship.
Location:
On-site -Austin, TX
If this job description resonates with you, we encourage you to apply, even if you don't meet all the requirements. We're interested in getting to know you and what you bring to the table!
Personalized benefits that support personal well-being and growth:
JLL recognizes the impact that the workplace can have on your wellness, so we offer a supportive culture and comprehensive benefits package that prioritizes mental, physical and emotional health. Some of these benefits may include:
401(k) plan with matching company contributions
Comprehensive Medical, Dental & Vision Care
Paid parental leave at 100% of salary
Paid Time Off and Company Holidays
Early access to earned wages through Daily Pay
JLL Privacy Notice
Jones Lang LaSalle (JLL), together with its subsidiaries and affiliates, is a leading global provider of real estate and investment management services. We take our responsibility to protect the personal information provided to us seriously. Generally, the personal information we collect from you is for the purposes of processing in connection with JLL's recruitment process. We endeavour to keep your personal information secure with an appropriate level of security, and to retain it only for as long as we need it for legitimate business or legal reasons. We will then delete it safely and securely.
For more information about how JLL processes your personal data, please view our Candidate Privacy Statement.
For additional details please see our career site pages for each country.
For candidates in the United States, please see a full copy of our Equal Employment Opportunity policy here.
Jones Lang LaSalle (“JLL”) is an Equal Opportunity Employer and is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process - including the online application and/or overall selection process - you may email us at ******************. This email is only to request an accommodation. Please direct any other general recruiting inquiries to our Contact Us page > I want to work for JLL.
Accepting applications on an ongoing basis until candidate identified.
Senior Data Engineer
Data engineer job in Austin, TX
We are looking for a seasoned Azure Data Engineer to design, build, and optimize secure, scalable, and high-performance data solutions within the Microsoft Azure ecosystem. This will be a multi-year contract worked FULLY ONSITE in Austin, TX.
The ideal candidate brings deep technical expertise in data architecture, ETL/ELT engineering, data integration, and governance, along with hands-on experience in MDM, API Management, Lakehouse architectures, and data mesh or data hub frameworks. This position combines strategic architectural planning with practical, hands-on implementation, empowering cross-functional teams to leverage data as a key organizational asset.
Key Responsibilities
1. Data Architecture & Strategy
Design and deploy end-to-end Azure data platforms using Azure Data Lake, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
Build and implement Lakehouse and medallion (Bronze/Silver/Gold) architectures for scalable and modular data processing.
Define and support data mesh and data hub patterns to promote domain-driven design and federated governance.
Establish standards for conceptual, logical, and physical data modeling across data warehouse and data lake environments.
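The medallion (Bronze/Silver/Gold) layering mentioned above can be illustrated with a minimal sketch. Plain Python structures stand in for Delta tables here; the data, field names, and functions are hypothetical, not from any specific system:

```python
# Minimal medallion-architecture sketch: raw (Bronze) -> cleansed (Silver)
# -> business aggregate (Gold). All data and names are hypothetical.

# Bronze: raw records ingested as-is, including duplicates and bad rows.
bronze = [
    {"order_id": "1", "amount": "10.50", "region": "TX"},
    {"order_id": "1", "amount": "10.50", "region": "TX"},   # duplicate
    {"order_id": "2", "amount": "bad",   "region": "TX"},   # unparseable
    {"order_id": "3", "amount": "7.25",  "region": "CA"},
]

def to_silver(rows):
    """Silver: deduplicate on order_id and enforce types; drop bad rows."""
    seen, out = set(), []
    for row in rows:
        if row["order_id"] in seen:
            continue
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # quarantine/reject unparseable records
        seen.add(row["order_id"])
        out.append({"order_id": row["order_id"], "amount": amount,
                    "region": row["region"]})
    return out

def to_gold(rows):
    """Gold: business-level aggregate, e.g. revenue per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'TX': 10.5, 'CA': 7.25}
```

In a real Azure Databricks deployment each layer would be a Delta table and the functions would be Spark transformations, but the layering discipline is the same: raw data lands untouched, quality rules apply at Silver, and Gold serves consumers.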
2. Data Integration & Pipeline Development
Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks for both batch and streaming workloads.
Integrate diverse data sources (on-prem, cloud, SaaS, APIs) into a unified Azure data environment.
Optimize pipelines for cost-effectiveness, performance, and scalability.
3. Master Data Management (MDM) & Data Governance
Implement MDM solutions using Azure-native or third-party platforms (e.g., Profisee, Informatica, Semarchy).
Define and manage data governance, metadata, and data quality frameworks.
Partner with business teams to align data standards and maintain data integrity across domains.
4. API Management & Integration
Build and manage APIs for data access, transformation, and system integration using Azure API Management and Logic Apps.
Design secure, reliable data services for internal and external consumers.
Automate workflows and system integrations using Azure Functions, Logic Apps, and Power Automate.
5. Database & Platform Administration
Perform core DBA tasks, including performance tuning, query optimization, indexing, and backup/recovery for Azure SQL and Synapse.
Monitor and optimize cost, performance, and scalability across Azure data services.
Implement CI/CD and Infrastructure-as-Code (IaC) solutions using Azure DevOps, Terraform, or Bicep.
6. Collaboration & Leadership
Work closely with data scientists, analysts, business stakeholders, and application teams to deliver high-value data solutions.
Mentor junior engineers and define best practices for coding, data modeling, and solution design.
Contribute to enterprise-wide data strategy and roadmap development.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related fields.
5+ years of hands-on experience in Azure-based data engineering and architecture.
Strong proficiency with the following:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake Storage Gen2
SQL, Python, PySpark, PowerShell
Azure API Management and Logic Apps
Solid understanding of data modeling approaches (3NF, dimensional modeling, Data Vault, star/snowflake schemas).
Proven experience with Lakehouse/medallion architectures and data mesh/data hub designs.
Familiarity with MDM concepts, data governance frameworks, and metadata management.
Experience with automation, data-focused CI/CD, and IaC.
Thorough understanding of Azure security, RBAC, Key Vault, and core networking principles.
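As a small illustration of the star-schema modeling named in the qualifications above, the following sketch builds a toy fact table joined to two dimension tables in SQLite; the table names and data are hypothetical:

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount      REAL
);
INSERT INTO dim_date    VALUES (20250101, 2025, 1), (20250201, 2025, 2);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fact_sales  VALUES (20250101, 1, 9.99),
                               (20250101, 2, 5.00),
                               (20250201, 1, 9.99);
""")

# A typical star join: monthly revenue by product.
rows = conn.execute("""
    SELECT d.year, d.month, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, d.month, p.name
    ORDER BY d.month, p.name
""").fetchall()
print(rows)
```

The same shape scales up directly: in Synapse or Databricks the fact table grows to billions of rows while the dimensions stay comparatively small, which is what makes the star join efficient.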
What We Offer
Competitive compensation and benefits package
Luna Data Solutions, Inc. (LDS) provides equal employment opportunities to all employees. All applicants will be considered for employment. LDS prohibits discrimination and harassment of any type regarding age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, and/or disability status.
Senior Data Governance Consultant (Informatica)
Data engineer job in Plano, TX
Senior Data Governance Consultant (Informatica)
About Paradigm - Intelligence Amplified
Paradigm is a strategic consulting firm that turns vision into tangible results. For over 30 years, we've helped Fortune 500 and high-growth organizations accelerate business outcomes across data, cloud, and AI. From strategy through execution, we empower clients to make smarter decisions, move faster, and maximize return on their technology investments. What sets us apart isn't just what we do, it's how we do it. Driven by a clear mission and values rooted in integrity, excellence, and collaboration, we deliver work that creates lasting impact. At Paradigm, your ideas are heard, your growth is prioritized, and your contributions make a difference.
Summary:
We are seeking a Senior Data Governance Consultant to lead and enhance data governance capabilities across a financial services organization
The Senior Data Governance Consultant will collaborate closely with business, risk, compliance, technology, and data management teams to define data standards, strengthen data controls, and drive a culture of data accountability and stewardship
The ideal candidate will have deep experience in developing and implementing data governance frameworks, data policies, and control mechanisms that ensure compliance, consistency, and trust in enterprise data assets
Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred
This position is Remote, with occasional travel to Plano, TX
Responsibilities:
Data Governance Frameworks:
Design, implement, and enhance data governance frameworks aligned with regulatory expectations (e.g., BCBS 239, GDPR, CCPA, DORA) and internal control standards
Policy & Standards Development:
Develop, maintain, and operationalize data policies, standards, and procedures that govern data quality, metadata management, data lineage, and data ownership
Control Design & Implementation:
Define and embed data control frameworks across data lifecycle processes to ensure data integrity, accuracy, completeness, and timeliness
Risk & Compliance Alignment:
Work with risk and compliance teams to identify data-related risks and ensure appropriate mitigation and monitoring controls are in place
Stakeholder Engagement:
Partner with data owners, stewards, and business leaders to promote governance practices and drive adoption of governance tools and processes
Data Quality Management:
Define and monitor data quality metrics and KPIs, establishing escalation and remediation procedures for data quality issues
Metadata & Lineage:
Support metadata and data lineage initiatives to increase transparency and enable traceability across systems and processes
Reporting & Governance Committees:
Prepare materials and reporting for data governance forums, risk committees, and senior management updates
Change Management & Training:
Develop communication and training materials to embed governance culture and ensure consistent understanding across the organization
Required Qualifications:
7+ years of experience in data governance, data management, or data risk roles within financial services (banking, insurance, or asset management preferred)
Strong knowledge of data policy development, data standards, and control frameworks
Proven experience aligning data governance initiatives with regulatory and compliance requirements
Familiarity with Informatica data governance and metadata tools
Excellent communication skills with the ability to influence senior stakeholders and translate technical concepts into business language
Deep understanding of data management principles (DAMA-DMBOK, DCAM, or equivalent frameworks)
Bachelor's or Master's Degree in Information Management, Data Science, Computer Science, Business, or related field
Preferred Qualifications:
Hands-on experience with Informatica, including Master Data Management (MDM) or Informatica Data Management Cloud (IDMC), is preferred
Experience with data risk management or data control testing
Knowledge of financial regulatory frameworks (e.g., Basel, MiFID II, Solvency II, BCBS 239)
Certifications, such as Informatica, CDMP, or DCAM
Background in consulting or large-scale data transformation programs
Key Competencies:
Strategic and analytical thinking
Strong governance and control mindset
Excellent stakeholder and relationship management
Ability to drive organizational change and embed governance culture
Attention to detail with a pragmatic approach
Why Join Paradigm
At Paradigm, integrity drives innovation. You'll collaborate with curious, dedicated teammates, solving complex problems and unlocking immense data value for leading organizations. If you seek a place where your voice is heard, growth is supported, and your work creates lasting business value, you belong at Paradigm.
Learn more at ********************
Policy Disclosure:
Paradigm maintains a strict drug-free workplace policy. All offers of employment are contingent upon successfully passing a standard 5-panel drug screen. Please note that a positive test result for any prohibited substance, including marijuana, will result in disqualification from employment, regardless of state laws permitting its use. This policy applies consistently across all positions and locations.
Applied Data Scientist/ Data Science Engineer
Data engineer job in Austin, TX
Role: Applied Data Scientist/ Data Science Engineer
Years of experience: 8+
Job type: Full-time
Job Responsibilities:
You will be part of a team that innovates and collaborates with internal stakeholders to deliver world-class solutions with a customer first mentality. This group is passionate about the data science field and is motivated to find opportunity in, and develop solutions for, evolving challenges.
You will:
Solve business and customer issues utilizing AI/ML - Mandatory
Build prototypes and scalable AI/ML solutions that will be integrated into software products
Collaborate with software engineers, business stakeholders and product owners in an Agile environment
Have complete ownership of model outcomes and drive continuous improvement
Essential Requirements:
Strong coding skills in Python and SQL - Mandatory
Machine Learning knowledge (Deep Learning, Information Retrieval (RAG), GenAI, Classification, Forecasting, Regression, etc., on large datasets) with experience in ML model deployment
Ability to work with internal stakeholders to translate business questions into quantitative problem statements
Ability to effectively communicate data science progress to non-technical internal stakeholders
Ability to lead a team of data scientists is a plus
Experience with Big Data technologies and/or software development is a plus
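As a rough illustration of the retrieval step behind the RAG experience listed above, this sketch ranks a toy corpus against a query using cosine similarity over bag-of-words vectors; the corpus, query, and function names are hypothetical stand-ins for an embedding model and vector store:

```python
import math
from collections import Counter

# Toy retrieval step of a RAG pipeline: rank documents against a query.
# In production, vectorize() would be an embedding model and docs a vector DB.
docs = {
    "doc1": "reset your password from the account settings page",
    "doc2": "shipping times vary by region and carrier",
    "doc3": "contact support to reset a forgotten password",
}

def vectorize(text):
    """Bag-of-words term counts as a stand-in for dense embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k document ids most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(docs[d])),
                    reverse=True)
    return ranked[:k]

top = retrieve("how do I reset my password")
print(top)  # the two password-related documents rank above the shipping one
```

The retrieved documents would then be passed to a generative model as context, which is the "augmented generation" half of RAG.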
Data Engineer III
Data engineer job in Austin, TX
Data Engineer III
Duration: Contract
We are seeking a highly skilled and experienced Data Engineer III to join our team in Austin, Texas. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines and systems to support our organization's data needs. This role requires a deep understanding of data engineering principles, strong problem-solving skills, and the ability to work collaboratively in a fast-paced environment.
Responsibilities:
Design, develop, and maintain scalable data pipelines and systems.
Collaborate with cross-functional teams to understand data requirements and deliver solutions.
Optimize and improve data workflows for efficiency and reliability.
Ensure data quality and integrity through robust testing and validation processes.
Monitor and troubleshoot data systems to ensure smooth operations.
Stay updated with the latest trends and technologies in data engineering.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience as a Data Engineer or in a similar role.
Strong proficiency in programming languages such as Python, Java, or Scala.
Experience with big data technologies like Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems.
Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
Excellent problem-solving and analytical skills.
Strong communication and teamwork abilities.
About PTR Global: PTR Global is a leading provider of information technology and workforce solutions. PTR Global has become one of the largest providers in its industry, with over 5000 professionals providing services across the U.S. and Canada. For more information visit *****************
At PTR Global, we understand the importance of your privacy and security. We NEVER ASK job applicants to:
Pay any fee to be considered for, submitted to, or selected for any opportunity.
Purchase any product, service, or gift cards from us or for us as part of an application, interview, or selection process.
Provide sensitive financial information such as credit card numbers or banking information. Successfully placed or hired candidates would only be asked for banking details after accepting an offer from us during our official onboarding processes as part of payroll setup.
Pay Range: $70 - $75
The specific compensation for this position will be determined by a number of factors, including the scope, complexity, and location of the role as well as the cost of labor in the market; the skills, education, training, credentials, and experience of the candidate; and other conditions of employment. Our full-time consultants have access to benefits including medical, dental, vision, and 401K contributions, as well as any other PTO, sick leave, and other benefits mandated by applicable states or localities where you reside or work.
If you receive a suspicious message, email, or phone call claiming to be from PTR Global do not respond or click on any links. Instead, contact us directly at ***************. To report any concerns, please email us at *******************
Data Engineer
Data engineer job in Austin, TX
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Senior Data Engineer (USC AND GC ONLY)
Data engineer job in Richardson, TX
Now Hiring: Senior Data Engineer (GCP / Big Data / ETL)
Duration: 6 Months (Possible Extension)
We're seeking an experienced Senior Data Engineer with deep expertise in Data Warehousing, ETL, Big Data, and modern GCP-based data pipelines. This role is ideal for someone who thrives in cross-functional environments and can architect, optimize, and scale enterprise-level data solutions on the cloud.
Must-Have Skills (Non-Negotiable)
9+ years in Data Engineering & Data Warehousing
9+ years hands-on ETL experience (Informatica, DataStage, etc.)
9+ years working with Teradata
3+ years hands-on GCP and BigQuery
Experience with Dataflow, Pub/Sub, Cloud Storage, and modern GCP data pipelines
Strong background in query optimization, data structures, metadata & workload management
Experience delivering microservices-based data solutions
Proficiency in Big Data & cloud architecture
3+ years with SQL & NoSQL
3+ years with Python or similar scripting languages
3+ years with Docker, Kubernetes, CI/CD for data pipelines
Expertise in deploying & scaling apps in containerized environments (K8s)
Strong communication, analytical thinking, and ability to collaborate across technical & non-technical teams
Familiarity with AGILE/SDLC methodologies
Key Responsibilities
Build, enhance, and optimize modern data pipelines on GCP
Implement scalable ETL frameworks, data structures, and workflow dependency management
Architect and tune BigQuery datasets, queries, and storage layers
Collaborate with cross-functional teams to define data requirements and support business objectives
Lead efforts in containerized deployments, CI/CD integrations, and performance optimization
Drive clarity in project goals, timelines, and deliverables during Agile planning sessions
📩 Interested? Apply now or DM us to explore this opportunity! You can share resumes at ********************* OR Call us on *****************
GCP Data Engineer
Data engineer job in Dallas, TX
MUST BE USC or Green Card; No vendors
GCP Data Engineer/Lead Onsite
Required Qualifications:
9+ years of hands-on Data Warehousing experience
9+ years of hands-on ETL (e.g., Informatica/DataStage) experience
3+ years of hands-on BigQuery experience
3+ years of hands-on GCP experience
9+ years of hands-on Teradata experience
9+ years working in a cross-functional environment.
3+ years of hands-on experience with Google Cloud Platform services like BigQuery, Dataflow, Pub/Sub, and Cloud Storage
3+ years of hands-on experience building modern data pipelines with GCP platform
3+ years of experience with Query optimization, data structures, transformation, metadata, dependency, and workload management
3+ years of experience with SQL, NoSQL
3+ years of experience in data engineering with a focus on microservices-based data solutions
3+ years of containerization (Docker, Kubernetes) and CI/CD for data pipelines
3+ years of experience with Python (or a comparable scripting language)
3+ years of experience with Big data and cloud architecture
3+ years of experience with deployment/scaling of apps in containerized environments (Kubernetes)
Excellent oral and written communications skills; ability to interact effectively with all levels within the organization.
Working knowledge of Agile/SDLC methodology
Excellent analytical and problem-solving skills.
Ability to interact and work effectively with technical & non-technical levels within the organization.
Ability to drive clarity of purpose and goals during release and planning activities.
Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
NEED ONLY US CITIZENS :: Data Engineer with Databricks and DLT experience
Data engineer job in Houston, TX
8-9+ years of relevant experience; strong proficiency in Databricks, the DLT (Delta Live Tables) framework, and PySpark; excellent communication skills required.
Thanks
Aatmesh
*************************
Azure Data Engineer (Databricks Certified, with Data Factory)
Data engineer job in Irving, TX
Azure Data Engineer with Data Factory.
Databricks certified
3 days a week onsite; can be based out of Irving, TX or Houston, TX.
Rate: $45/hr (W2).
Data Engineer
Data engineer job in Irving, TX
W2 Contract to Hire Role with Monthly Travel to the Dallas Texas area
We are looking for a highly skilled and independent Data Engineer to support our analytics and data science teams, as well as external client data needs. This role involves writing and optimizing complex SQL queries, generating client-specific data extracts, and building scalable ETL pipelines using Azure Data Factory. The ideal candidate will have a strong foundation in data engineering, with a collaborative mindset and the ability to work across teams and systems.
Duties/Responsibilities:
Develop and optimize complex SQL queries to support internal analytics and external client data requests.
Generate custom data lists and extracts based on client specifications and business rules.
Design, build, and maintain efficient ETL pipelines using Azure Data Factory.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
Work with Salesforce data; familiarity with SOQL is preferred but not required.
Support Power BI reporting through basic data modeling and integration.
Assist in implementing MLOps practices for model deployment and monitoring.
Use Python for data manipulation, automation, and integration tasks.
Ensure data quality, consistency, and security across all workflows and systems.
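The core of this role — writing and optimizing complex SQL for client extracts — can be sketched with a CTE. This is a hedged illustration with a made-up schema, using stdlib `sqlite3` as a stand-in for the production warehouse:

```python
import sqlite3

# Hypothetical client-extract query (illustrative schema, not the client's).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (client_id INTEGER, amount REAL);
    INSERT INTO clients VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# A CTE keeps the aggregation step readable and easy to tune in isolation.
rows = conn.execute("""
    WITH totals AS (
        SELECT client_id, SUM(amount) AS total
        FROM orders GROUP BY client_id
    )
    SELECT c.name, t.total
    FROM clients c JOIN totals t ON t.client_id = c.id
    ORDER BY t.total DESC
""").fetchall()
# rows -> [('Acme', 150.0), ('Globex', 75.0)]
```

The same pattern — aggregate in a named CTE, then join to reference data — scales to the multi-step client extracts described above.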
Required Skills/Abilities/Attributes:
5+ years of experience in data engineering or a related field.
Strong proficiency in SQL, including query optimization and performance tuning.
Experience with Azure Data Factory, including Git repository integration and pipeline deployment.
Ability to translate client requirements into accurate and timely data outputs.
Working knowledge of Python for data-related tasks.
Strong problem-solving skills and ability to work independently.
Excellent communication and documentation skills.
Preferred Skills/Experience
Previous knowledge of building pipelines for ML models.
Extensive experience creating/managing stored procedures and functions in MS SQL Server
2+ years of experience in cloud architecture (Azure, AWS, etc.)
Experience with code-management systems (Azure DevOps)
2+ years of reporting design and management (Power BI preferred)
Ability to influence others through the articulation of ideas, concepts, benefits, etc.
Education and Experience:
Bachelor's degree in a computer science field or applicable business experience.
Minimum 3 years of experience in a Data Engineering role
Healthcare experience preferred.
Physical Requirements:
Prolonged periods sitting at a desk and working on a computer.
Ability to lift 20 lbs.
Data Scientist (F2F Interview)
Data engineer job in Dallas, TX
W2 Contract
Dallas, TX (Onsite)
We are seeking an experienced Data Scientist to join our team in Dallas, Texas. The ideal candidate will have a strong foundation in machine learning, data modeling, and statistical analysis, with the ability to transform complex datasets into clear, actionable insights that drive business impact.
Key Responsibilities
Develop, implement, and optimize machine learning models to support business objectives.
Perform exploratory data analysis, feature engineering, and predictive modeling.
Translate analytical findings into meaningful recommendations for technical and non-technical stakeholders.
Collaborate with cross-functional teams to identify data-driven opportunities and improve decision-making.
Build scalable data pipelines and maintain robust analytical workflows.
Communicate insights through reports, dashboards, and data visualizations.
Qualifications
Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
Proven experience working with machine learning algorithms and statistical modeling techniques.
Proficiency in Python or R, along with hands-on experience using libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow.
Strong SQL skills and familiarity with relational or NoSQL databases.
Experience with data visualization tools (e.g., Tableau, Power BI, matplotlib).
Excellent problem-solving, communication, and collaboration skills.
Data Engineer (Python, PySpark, Databricks)
Data engineer job in Dallas, TX
Job Title: Data Engineer (Python, PySpark, Databricks)
Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. This role will focus on scalable data processing, transformation, and integration efforts that enable business insights, regulatory compliance, and operational efficiency.
Data Engineer - SQL, Python and Pyspark Expert (Onsite - Dallas, TX)
Key Responsibilities
Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments
Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
Ensure data quality, consistency, security, and lineage across all stages of data processing
Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
Document data flows, logic, and transformation rules
Troubleshoot performance and quality issues in batch and real-time pipelines
Support compliance-related reporting (e.g., HMDA, CFPB)
Required Qualifications
6+ years of experience in data engineering or data development
Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.)
Strong hands-on skills in Python for scripting, data wrangling, and automation
Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
Experience working with mortgage banking data sets and domain knowledge is highly preferred
Strong understanding of data modeling (dimensional, normalized, star schema)
Experience with cloud-based platforms (e.g., Azure Databricks, AWS EMR, GCP Dataproc)
Familiarity with ETL tools, orchestration frameworks (e.g., Airflow, ADF, dbt)
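The dimensional modeling this role asks for can be sketched as a minimal star schema. A hedged, hypothetical example in mortgage-flavored terms — the table and column names are illustrative, not the client's actual model, and stdlib `sqlite3` stands in for Snowflake/BigQuery:

```python
import sqlite3

# Minimal star-schema sketch (hypothetical tables and columns).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- dimension: one row per loan, holding descriptive attributes
    CREATE TABLE dim_loan (loan_key INTEGER PRIMARY KEY, product TEXT, state TEXT);
    -- fact: one row per monthly payment event, keyed to the dimension
    CREATE TABLE fact_payment (loan_key INTEGER, month TEXT, amount REAL);
    INSERT INTO dim_loan VALUES (1, '30yr_fixed', 'TX'), (2, '15yr_fixed', 'TX');
    INSERT INTO fact_payment VALUES (1, '2024-01', 1500.0), (2, '2024-01', 2100.0);
""")

# Typical star-schema query: aggregate the fact table, sliced by
# attributes that live only in the dimension.
rows = conn.execute("""
    SELECT d.product, SUM(f.amount)
    FROM fact_payment f JOIN dim_loan d USING (loan_key)
    GROUP BY d.product ORDER BY d.product
""").fetchall()
```

Keeping measures in narrow fact tables and attributes in dimensions is what makes the downstream data marts and reporting views mentioned above cheap to build.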
Azure Data Engineer Sr
Data engineer job in Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI).
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
GCP Data Engineer
Data engineer job in Fort Worth, TX
Job Title: GCP Data Engineer
Employment Type: W2/CTH
Client: Direct
We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and Google Cloud Platform (GCP) services. The ideal candidate will have 6-8 years of hands-on experience in building and maintaining scalable data pipelines, working with APIs, and leveraging GCP tools such as BigQuery, Cloud Composer, and Dataflow.
Core Responsibilities:
• Design, build, and maintain scalable data pipelines to support analytics and business operations.
• Develop and optimize ETL processes for structured and unstructured data.
• Work with BigQuery, Cloud Composer, and other GCP services to manage data workflows.
• Collaborate with data analysts and business teams to ensure data availability and quality.
• Integrate data from multiple sources using APIs and custom scripts.
• Monitor and troubleshoot pipeline performance and reliability.
• Technical Skills:
o Strong proficiency in Python and SQL.
o Experience with data pipeline development and ETL frameworks.
• GCP Expertise:
o Hands-on experience with BigQuery, Cloud Composer, and Dataflow.
• Additional Requirements:
o Familiarity with workflow orchestration tools and cloud-based data architecture.
o Strong problem-solving and analytical skills.
o Excellent communication and collaboration abilities.
Data Scientist with Gen Ai and Python experience
Data engineer job in Plano, TX
About the Company
Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction.
Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.
Here's the job details,
Data Scientist with Gen Ai and Python experience
Plano, TX- 5 days Onsite
18+ Months
Job Overview:
We are seeking a competent Data Scientist who is independent, results-driven, and capable of taking business requirements and building out the technology to generate statistically sound analyses and production-grade ML models.
Data science skills with GenAI and LLM knowledge.
Expertise in Python/Spark and their related libraries and frameworks.
Experience building ML training pipelines and handling ML model deployment.
Experience with other ML concepts: real-time distributed model-inference pipelines, champion/challenger frameworks, and A/B testing.
Familiarity with DS/ML production implementation.
Excellent problem-solving skills, with attention to detail and a focus on quality and timely delivery of assigned tasks.
Prior knowledge of Azure cloud and Databricks is a big plus.
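The A/B-testing experience asked for above boils down to comparing conversion rates between a champion and a challenger. A hedged, stdlib-only sketch of a two-proportion z-test — the numbers are invented for illustration, and a production framework would add guardrails this omits:

```python
from math import erf, sqrt

def ab_test_pvalue(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test (champion A vs challenger B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical experiment: challenger converts 26% vs champion's 20%.
p = ab_test_pvalue(200, 1000, 260, 1000)
```

With 1,000 users per arm, a 20% vs 26% split yields a p-value well below 0.01, so the challenger would be promoted; a 20% vs 20.5% split would not reach significance.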
Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
Data Engineer
Data engineer job in Dallas, TX
Junior Data Engineer
DESCRIPTION: BeaconFire is based in Central NJ and specializes in Software Development, Web Development, and Business Intelligence; we are looking for candidates who are good communicators and self-motivated. You will play a key role in building, maintaining, and operating integrations, reporting pipelines, and data transformation systems.
Qualifications:
Passion for data and a deep desire to learn.
Master's degree in Computer Science/Information Technology, Data Analytics/Data Science, or a related discipline.
Intermediate Python; experience with data-processing libraries (NumPy, Pandas, etc.) is a plus.
Experience with relational databases (SQL Server, Oracle, MySQL, etc.)
Strong written and verbal communication skills.
Ability to work both independently and as part of a team.
Responsibilities:
Collaborate with the analytics team to find reliable data solutions to meet the business needs.
Design and implement scalable ETL or ELT processes to support the business demand for data.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Build and incorporate automated unit tests, participate in integration testing efforts.
Work with teams to resolve operational & performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.
Senior Data Engineer
Data engineer job in Houston, TX
Our client is seeking an experienced Data Engineer (5+ years) to join their Big Data and Advanced Analytics team. In this role, you'll collaborate closely with the Data Science team and various business units to tackle real-world challenges in the oil and gas midstream sector using machine learning, AI, and data-driven solutions. You'll also play a key role in shaping and advancing the organization's data engineering practices.
Job Description
Design, build, test, and maintain scalable data pipelines
Independently handle analytics projects across multiple business functions
Automate manual data processes for efficiency and scalability
Develop data-intensive applications and APIs
Create algorithms that turn raw data into actionable insights
Deploy and operationalize machine learning and mathematical models
Support data analysts and data scientists by streamlining data processing and model deployment
Ensure data accuracy and consistency through quality checks
Skills Required
5+ years of professional IT experience, ideally in network security engineering
Strong experience with:
Python (Pandas, NumPy, Pytest, Scikit-Learn)
SQL
Apache Airflow
Kubernetes
CI/CD pipelines
Git version control
Test-Driven Development (TDD)
API development
Familiarity with machine learning concepts and applications
Education/Certifications
High School Diploma or GED
GAS Global Services LLC is an Equal Opportunity Employer. Employment Decision are made without regard to race, color, religion, sex, sexual orientation, age, national origin, disability, protected veteran status, gender identity or any other factors protected by applicable federal, state or local laws.
JOB-10045560
Python Data Engineer - THADC5693417
Data engineer job in Houston, TX
Must Haves:
Strong proficiency in Python; 5+ years of experience.
Expertise in FastAPI and microservices architecture and coding
Linking Python-based apps with SQL and NoSQL databases
Deployments on Docker and Kubernetes, plus monitoring tools
Experience with automated testing and test-driven development
Git source control, GitHub Actions, CI/CD, VS Code, and Copilot
Expertise in both on-prem SQL databases (Oracle, SQL Server, Postgres, DB2) and NoSQL databases
Working knowledge of data warehousing and ETL
Able to explain the business functionality of the projects/applications they have worked on
Ability to multitask and work on multiple projects simultaneously.
NO CLOUD: the environment is fully on-prem.
Day to Day:
Insight Global is looking for a Python Data Engineer for one of our largest oil and gas clients in Downtown Houston, TX. This person will be responsible for building Python-based integrations between back-end SQL and NoSQL databases, architecting and coding FastAPI microservices, and performing testing on back-office applications. The ideal candidate will have experience developing applications using Python and microservices and implementing complex business functionality in Python.