Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
All candidates who meet the minimum qualifications for this opportunity will be reviewed after the application period closes on Friday, January 30. Candidates selected to interview will be notified by Friday, February 6.
What You Will Do:
Many organizations lack a clear view of their data assets, keeping the full value of their data out of reach. Guidehouse delivers end-to-end services, including designing, implementing, and deploying AI solutions and robust data platforms as well as providing advanced analytics and insights. Guidehouse's tailored solutions optimize operations, enhance customer experiences, and drive innovation.
As a Consultant, you will join Guidehouse's AI & Data team - a “horizontal” team dedicated to delivering artificial intelligence, machine learning, and advanced analytics solutions to drive innovation and deliver impactful value across Guidehouse's Public Sector client segments: Defense & Security, Communities, Energy, and Infrastructure, Financial Services, and Health. The AI & Data team has a sub-team within each segment focused on applying cutting-edge technologies and strategies to address the segment's most complex and rapidly evolving challenges across a variety of domains. You'll contribute to high-value initiatives, which may include internal innovation efforts or strategic projects, and gain hands-on experience with modern tools and methodologies. You'll collaborate with experienced professionals, grow your technical capabilities, and help shape data-driven solutions that matter.
Consultants support project teams both on client engagements (on and off-site) and internal projects. Responsibilities will include client and project management, data and information analysis, solution implementation and generation of project deliverables. As a Consultant, a key function of your role will be to support the development and creation of quality deliverables that support essential project workstreams. You will gather and analyze data, identify gaps and trends, and make recommendations related to baseline performance and structure, as well as established best practices and benchmarks.
We encourage career development and hiring for the long term. As a Consultant, you will follow a clearly defined career path and continue to deepen your specialized industry knowledge and consulting skills. As you develop project management skills and leadership abilities, you will have the opportunity to progress to the Senior Consultant level.
What You Will Need:
Minimum Years of Experience: 0 years
Minimum Degree Status: Undergraduate Degree or Graduate Degree (must be enrolled in an accredited undergraduate or graduate degree program through Fall 2025 and graduate by Summer 2026)
Working knowledge of programming languages such as Python, R, or SQL.
Willingness to learn new technical skills.
Ability to work collaboratively with other data scientists and adjacent roles.
Ability to adhere to on-site work schedules in the DC metro area as directed.
Ability to work in the United States without sponsorship now or at any time in the future; students holding F-1 or J-1 visas are not eligible to interview or be hired for this position.
Ability to obtain and maintain a Public Trust, Secret, or higher level of federal/government security clearance (US Citizenship is one of the requirements for security clearance).
What Would Be Nice To Have:
Degree Concentration: Technical field of study relevant to AI/ML and data science, such as Computer Science, Data Science, Machine Learning, Artificial Intelligence, Information Science, Information Technology, etc.
Previous internship or work experience
Experience developing data science, predictive models, and AI solutions using tools such as Python or R.
Experience performing data engineering and data wrangling using tools such as Python.
Experience performing data visualization using tools such as Power BI or Tableau.
Strong communication and presentation skills for both technical and non-technical audiences.
Ability to write technical process flows, diagrams, and model documentation.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer and will consider all qualified applicants without regard to protected veteran status, disability, or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
$72k-94k yearly est. 1d ago
Principal Data Consultant
Cotality
Data engineer job in Dallas, TX
The Principal Data Consultant works in partnership with Product Management, Sales, Customer Support, Order Management, Data Delivery centers, and other cross-functional groups to define and document requirements for standard and custom data products, and to provide analytical and liaison support for the delivery of sustainable and streamlined data solutions. The hybrid-remote Principal Data Consultant leverages both data and client knowledge to define and align customer requirements to data extract specifications and deliverables, while supporting account executives and product managers on the applicability of CoreLogic data to business use cases.
Facilitate discovery of business requirements from the client's perspective, translated into specifications that deliver the right mix of data elements to enable clients to make timely and insightful decisions
Collaborate with clients and internal stakeholders to ensure data needs are well understood
Gather requirements from internal and external stakeholders to write extract specifications that meet clients' needs
Define user stories and requirements, and conduct research and analysis in support of developing standardized layouts and packages to meet requirements
Write detailed descriptions of user needs (including acceptance criteria) for the development of custom data extracts or, preferentially, new standard data products
Validate specifications with internal and external stakeholders to ensure they are consistent with articulated business requirements and deliver the expected results
Coordinate issue resolution across multiple teams
Bachelor's degree in Business Administration, Computer Science, a relevant discipline, or equivalent experience
Typically 10+ years of Business/Process Analyst or directly related experience in the same or a related industry
Excellent written and oral communication skills
Ability to translate business requirements into complete and deliverable specifications
Excellent organizational, project management, and facilitation skills; ability to prioritize and handle multiple concurrent projects
Excellent interpersonal skills for interacting with and influencing cross-functional teams and gaining consensus; strong listening and question-based knowledge-gathering skills
Ability to synthesize and analyze data from a variety of sources, identify issues, draw conclusions, and craft solutions
Experience working as a member of a distributed team; ability to organize and coordinate with stakeholders across multiple functions and geographic locations
Domain knowledge covering one or more aspects of the real estate economy (mortgage servicing, property insurance, credit, etc.)
Experience authoring SQL queries
Familiarity with diverse coding, profiling, and visualization approaches, including Python, Tableau, and Google Cloud
Extensive knowledge of the client base, product, and data offerings
Understanding of Big Data, cloud, and machine learning approaches and concepts
Cotality is fully committed to a work environment that embraces everyone's unique contributions, experiences, and values. We offer an empowered work environment that encourages creativity, initiative, and professional growth, and provides a competitive salary and benefits package. To find this role on our careers site, navigate to "Locations" and filter on the desired country (e.g., United States, Canada, United Kingdom, India).
We have some exciting news to share with you: CoreLogic will now be called Cotality! Our new brand reflects our commitment to delivering innovative solutions and building deep relationships. Speaking of fresh starts: if you're thinking about making a change, and you're someone who wants to work for a growing company with a people-first culture that has earned us Great Place to Work recognitions in six countries, apply today!
Cotality is an Equal Opportunity employer committed to attracting and retaining the best-qualified people available, without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, disability, status as a veteran of the Armed Forces, or any other basis protected by federal, state, or local law. Cotality will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.
If you need a reasonable accommodation to complete the online application, please contact Talent Acquisition at ******************** or **************. Please include the specific accommodation you are requesting and the position you are applying for (job title and/or job number). You may also reach us at ********************; please allow up to three business days to receive a response. If you are writing on behalf of an organization, please include your organization's name, the organization's location(s), and a brief description of your need.
Cotality is aware of schemes involving fraudulent job postings on third-party employment search sites and/or individuals or entities claiming to be employees of Cotality. Those involved offer fraudulent employment opportunities to applicants, often asking for sensitive personal and financial information.
If you believe you have been contacted by anyone misrepresenting themselves as an employee of Cotality or a Cotality recruiter, please contact Cotality at ********************.
Please be advised that all legitimate correspondence from a Cotality employee will come from "@Cotality.com" or "@CoreLogic.com" email accounts.
Cotality will not interview candidates via text or email. Our interviews are conducted by recruiters and leaders by phone, via Zoom/Teams, or in person.
Cotality will never ask candidates to make any type of personal financial investment related to gaining employment with the Company.
Please click to view the Cotality Applicant Privacy Statement.
By providing your telephone number, you agree to receive automated (SMS) text messages at that number from Cotality regarding all matters related to your application and, if you are hired, your employment and company business. Message and data rates may apply. You can opt out at any time by responding STOP or UNSUBSCRIBE and will automatically be opted out company-wide.
$77k-107k yearly est. 5d ago
Data Engineer
Robert Half 4.5
Data engineer job in Dallas, TX
This is a Full Time Direct Hire Permanent Role
No C2C
No Sponsorship
We are seeking Data Engineers to support a critical migration project from SQL Server to Azure Synapse Analytics. This role combines data engineering, data analysis, and visualization responsibilities to ensure a smooth transition and an optimized data architecture.
Key Responsibilities
Design, develop, and implement data pipelines to migrate data from SQL Server to Azure Synapse Analytics.
Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
Optimize data models and queries for performance and scalability in Azure Synapse.
Perform data analysis and validation to ensure accuracy and integrity during migration.
Develop dashboards and reports using Power BI for business insights.
Work with Microsoft Fabric to integrate and manage data workflows.
Ensure compliance with data governance and security standards.
Required Skills & Qualifications
Strong experience with SQL (query optimization, stored procedures).
Proficiency in Python for data processing and automation.
Hands-on experience with Apache Spark for big data processing.
Expertise in Azure Synapse Analytics and familiarity with Microsoft Fabric.
Experience building and maintaining Power BI dashboards.
Solid understanding of data warehousing concepts and ETL processes.
Preferred Qualifications
Experience with cloud-based data architecture and migration projects.
$86k-121k yearly est. 4d ago
Data Scientist
Mastech Digital 4.7
Data engineer job in Irving, TX
Contract Type: W2 Only
Job Title: Data Scientist
We are seeking a mid-level Data Scientist to join our team and help drive personalized customer experiences through customer segmentation and online experimentation. In this role, you will build and deploy machine learning models, design and analyze A/B tests, and translate business objectives into data-driven insights that directly influence marketing and product decisions.
This role is ideal for someone who enjoys combining statistical rigor, applied machine learning, and business impact in a fast-paced, collaborative environment.
Required Skills:
Research, prototype, and develop AI/ML solutions for customer segmentation and personalization.
Design, execute, and analyze online controlled experiments (A/B tests, multivariate tests) to validate hypotheses and measure business impact.
Build, deploy, monitor, and analyze machine learning models in production environments.
Apply statistical experimentation techniques when launching and iterating on AI-driven products.
Partner closely with data, product, marketing, and engineering teams to align solutions with business goals.
Translate complex analytical findings into clear, actionable insights for non-technical stakeholders.
Stay current with emerging tools, frameworks, and best practices in data science and experimentation.
Deliver high-quality work on time with a strong focus on research rigor, innovation, and engineering excellence.
Demonstrate measurable impact through data science applied to marketing and personalization use cases.
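The experimentation responsibilities above typically reduce to a standard significance test on conversion rates. As a minimal, illustrative sketch (not from the posting; the sample counts and function name are invented), a two-sided two-proportion z-test can be done with only the standard library:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (A/B test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, expressed via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: control converts 200/5000, variant 260/5000.
z, p = two_proportion_ztest(200, 5000, 260, 5000)
print(f"z={z:.2f}, p={p:.4f}")  # reject the null at alpha=0.05 if p < 0.05
```

In practice a library such as statsmodels or scipy would be used, but the underlying arithmetic is as above.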
Basic Qualifications:
2+ years of experience in statistical data science, feature engineering, and customer segmentation.
2+ years of hands-on experience with Python, SQL, and PySpark.
2+ years of experience training, evaluating, and deploying machine learning models.
2+ years of experience productionizing ML workloads in AWS or Azure.
Familiarity with containerized deployments using Docker and/or Kubernetes.
Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or equivalent practical experience.
Preferred Qualifications:
Experience working with the Databricks platform.
Experience building ML models using large-scale structured and unstructured data.
Exposure to MarTech ecosystems, such as: Customer Data Platforms (CDPs); Data Management Platforms (DMPs); Email Service Providers (ESPs)
Experience integrating data science solutions into marketing and personalization workflows.
$73k-98k yearly est. 5d ago
Data Analytics Engineer
Harnham
Data engineer job in Austin, TX
About the Role
We are looking for a Data Analytics Engineer who sits at the intersection of data engineering and analytics. In this role, you will transform raw, messy data from vehicles, APIs, and operational systems into clean, reliable datasets that are trusted and widely used, from engineering teams to executive leadership.
You will own data pipelines end to end and build dashboards that surface insights, track performance, and help teams quickly identify issues.
What You'll Do
Build and maintain ETL pipelines that ingest data from diverse internal systems into a centralized analytics warehouse
Work with unique and high-volume datasets, including vehicle telemetry, sensor-derived signals, logistics data, and system test results
Write efficient, well-structured SQL to model and prepare data for analysis and reporting
Design, build, and maintain dashboards (e.g., Grafana or similar) used to monitor system performance and operational health
Partner closely with engineering, operations, and leadership teams to understand data needs and deliver actionable datasets
Explore internal AI- and LLM-based tools to automate analysis and uncover new insights
What You'll Need
Strong hands-on experience with Python and data libraries such as pandas, Polars, or similar
Advanced SQL skills, including complex joins, window functions, and query optimization
Proven experience building and operating ETL pipelines using modern data tooling
Experience with BI and visualization tools (e.g., Grafana, Tableau, Looker)
Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect
High-level understanding of LLMs and interest in applying them to data and analytics workflows
Strong ownership mindset and commitment to data quality and reliability
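The window-function requirement above can be sketched in a self-contained way using Python's stdlib sqlite3 (SQLite supports window functions since 3.25); the telemetry table and column names below are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE telemetry (vehicle TEXT, ts INTEGER, speed REAL)")
con.executemany(
    "INSERT INTO telemetry VALUES (?, ?, ?)",
    [("v1", 1, 10.0), ("v1", 2, 14.0), ("v1", 3, 12.0),
     ("v2", 1, 30.0), ("v2", 2, 28.0)],
)

# Window function: per-vehicle running average speed, ordered by timestamp.
rows = con.execute("""
    SELECT vehicle, ts,
           AVG(speed) OVER (
               PARTITION BY vehicle ORDER BY ts
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           ) AS running_avg
    FROM telemetry
    ORDER BY vehicle, ts
""").fetchall()

for vehicle, ts, avg in rows:
    print(vehicle, ts, round(avg, 2))
```

The same pattern (PARTITION BY entity, ORDER BY time) covers rankings, lags, and rolling aggregates over telemetry-style data in any warehouse dialect.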
Nice to Have
Experience with ClickHouse or other analytical databases (e.g., Snowflake, BigQuery, Redshift)
Background working with vehicle, sensor, or logistics data
Prior experience in autonomous systems, robotics, or other data-intensive hardware-driven domains
$78k-106k yearly est. 2d ago
GCP Data Engineer/Lead
Dexian
Data engineer job in Irving, TX
Required Qualifications:
9+ years of hands-on data warehousing experience
9+ years of hands-on ETL experience (e.g., Informatica, DataStage)
3+ years of hands-on BigQuery experience
3+ years of hands-on GCP experience
9+ years of hands-on Teradata experience
9+ years working in a cross-functional environment
3+ years of hands-on experience with Google Cloud Platform services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage
3+ years of hands-on experience building modern data pipelines on the GCP platform
3+ years of experience with query optimization, data structures, transformation, metadata, dependency, and workload management
3+ years of experience with SQL and NoSQL
3+ years of experience in data engineering with a focus on microservices-based data solutions
3+ years of containerization (Docker, Kubernetes) and CI/CD experience for data pipelines
3+ years of experience with Python (or a comparable scripting language)
3+ years of experience with big data and cloud architecture
3+ years of experience deploying and scaling applications in containerized environments (Kubernetes)
Excellent oral and written communications skills; ability to interact effectively with all levels within the organization.
Working knowledge of AGILE/SDLC methodology
Excellent analytical and problem-solving skills.
Ability to interact and work effectively with technical & non-technical levels within the organization.
Ability to drive clarity of purpose and goals during release and planning activities.
Excellent organizational skills, including the ability to prioritize tasks efficiently with a high level of attention to detail.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
$76k-103k yearly est. 2d ago
SAP Data Architect
Excelon Solutions 4.5
Data engineer job in Austin, TX
Title: SAP Data Architect
Mode: Full-time
Expectations / Deliverables for the Role
Builds the SAP data foundation by defining how SAP systems store, share, and manage trusted enterprise data.
Produces reference data architectures by leveraging expert input from application, analytics, integration, platform, and security teams. These architectures form the basis for new solutions and enterprise data initiatives.
Enables analytics and AI use cases by ensuring data is consistent, governed, and discoverable.
Leverages SAP Business Data Cloud, Datasphere, MDG and related capabilities to unify data and eliminate duplicate data copies.
Defines and maintains common data model catalogs to create a shared understanding of core business data.
Evolves data governance, ownership, metadata, and lineage standards across the enterprise.
Protects core transactional systems by preventing excessive replication and extraction loads.
Technical Proficiency
Strong knowledge of SAP master and transactional data domains.
Hands-on experience with SAP MDG, Business Data Cloud, BW, Datasphere, or similar platforms.
Expertise in data modeling, metadata management, data quality, and data governance practices.
Understanding of data architectures that support analytics, AI, and regulatory requirements.
Experience integrating SAP data with non-SAP analytics and reporting platforms.
Soft Skills
Ability to align data and engineering teams around a shared data vision and drive consensus on data standards and decisions
Strong facilitation skills to resolve data ownership and definition conflicts.
Clear communicator who can explain architecture choices, trade-offs, and cost impacts to stakeholders.
Pragmatic mindset focused on value, reuse, and simplification.
Comfortable challenging designs constructively in ARB reviews
$92k-124k yearly est. 4d ago
Azure Data Engineer
GAC Solutions
Data engineer job in Plano, TX
Experience: 8+ years (with strong hands-on Azure Data Factory and PL/SQL experience)
We are looking for a highly skilled Azure Data Engineer with deep expertise in Azure Data Factory (ADF) and PL/SQL to design, build, and optimize enterprise-scale data integration and transformation solutions.
The ideal candidate will work closely with business, analytics, and architecture teams to deliver reliable, scalable, and high-performance data pipelines.
Design, develop, and maintain ADF pipelines for batch and incremental data loads
Implement ETL/ELT workflows using ADF activities such as Copy, Data Flow, Lookup, ForEach, and Web activities
Integrate ADF with Azure Data Lake Storage Gen2, Azure Blob Storage, Azure SQL, Synapse Analytics, and on-prem systems
Develop and optimize PL/SQL stored procedures, packages, functions, and triggers
Write complex SQL queries involving joins, subqueries, analytics functions, and performance tuning
Perform query optimization using indexes, execution plans, and partitioning strategies.
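The index/execution-plan responsibility above generalizes across engines. As an engine-agnostic sketch (using Python's stdlib sqlite3 rather than Oracle PL/SQL, with invented table names), adding an index visibly changes the plan from a full table scan to an index search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, float(i)) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

# Without an index on customer_id, SQLite must scan the whole table.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index, the plan switches to an index search.
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[-1][-1])  # e.g. a "SCAN"-style plan step (exact text varies by version)
print(plan_after[-1][-1])   # e.g. a "SEARCH ... USING INDEX idx_orders_customer" step
```

Oracle's equivalent workflow uses EXPLAIN PLAN and DBMS_XPLAN, but the diagnostic loop (read the plan, add or adjust an index or partition, re-read the plan) is the same.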
$76k-103k yearly est. 3d ago
Data Engineer
Pyramid Consulting, Inc. 4.1
Data engineer job in Dallas, TX
Immediate need for a talented Data Engineer. This is a 6+ month contract opportunity with long-term potential, located in Dallas, TX (hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID:26-00480
Pay Range: $40 - $45/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Design, develop, and optimize end-to-end data pipelines using Python and PySpark
Build and maintain ETL/ELT workflows to process structured and semi-structured data
Write complex SQL queries for data transformation, validation, and performance optimization
Develop scalable data solutions using Azure services such as Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, and Databricks
Ensure data quality, reliability, and performance across data platforms
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements
Implement best practices for data governance, security, and compliance
Monitor and troubleshoot data pipeline failures and performance issues
Support production deployments and ongoing enhancements
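The data-quality responsibility above usually amounts to rule checks applied before load. A generic, stdlib-only sketch (the record shape, field names, and rules are invented for illustration):

```python
def validate(records, required_fields=("id", "amount")):
    """Split records into valid rows and rule violations before loading."""
    valid, errors, seen_ids = [], [], set()
    for i, rec in enumerate(records):
        # Rule 1: required fields must be present and non-null.
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        # Rule 2: primary key must be unique across the batch.
        elif rec["id"] in seen_ids:
            errors.append((i, f"duplicate id: {rec['id']}"))
        else:
            seen_ids.add(rec["id"])
            valid.append(rec)
    return valid, errors

rows = [{"id": 1, "amount": 9.5}, {"id": 1, "amount": 3.0}, {"id": 2, "amount": None}]
valid, errors = validate(rows)
print(len(valid), len(errors))  # 1 valid row, 2 violations
```

At scale the same checks would typically run as PySpark column expressions or as expectations in a framework like Great Expectations, but the rule structure is identical.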
Key Requirements and Technology Experience:
Must-have skills: Data Engineer, Azure, Python, PySpark
Strong proficiency in SQL for querying and data modeling
Hands-on experience with Python for data processing and automation
Solid experience using PySpark for distributed data processing
Experience working with Microsoft Azure data services
Understanding of data warehousing concepts and big data architectures
Experience with batch and/or real-time data processing
Ability to work independently and within cross-functional teams
Experience with Azure Databricks
Knowledge of data modeling techniques (star/snowflake schemas)
Familiarity with CI/CD pipelines and version control tools (Git)
Exposure to data security, access control, and compliance standards
Experience with streaming technologies
Knowledge of DevOps or DataOps practices
Cloud certifications (Azure preferred)
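Several items above (dimensional modeling, SQL) come together in a star schema: one fact table surrounded by the dimension tables it references. A tiny illustrative example using Python's stdlib sqlite3 (the table names and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Star schema: fact_sales holds measures plus foreign keys into the dimensions.
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, month TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO dim_date    VALUES (1, '2024-01'), (2, '2024-02');
    INSERT INTO fact_sales  VALUES (1, 1, 10.0), (1, 2, 20.0), (2, 1, 5.0);
""")

# Typical star-schema query: join the fact to its dimensions, then aggregate.
rows = con.execute("""
    SELECT p.name, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id    = f.date_id
    GROUP BY p.name, d.month
    ORDER BY p.name, d.month
""").fetchall()
print(rows)
```

A snowflake schema differs only in that dimensions are further normalized into sub-dimension tables.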
Our client is a leading Pharmaceutical Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
$40-45 hourly 5d ago
Principal Data Architect - Power BI
Dynamics Door
Data engineer job in Houston, TX
We are seeking a Senior Data Architect to play a strategic, business-facing role in shaping how data is used across the organization. This role will be a key partner to Operations, Supply Chain, and Executive Leadership, helping the business fully leverage Power BI and Dynamics 365 F&O to drive insight, decision-making, and performance.
The ideal candidate brings deep Supply Chain Management (SCM) knowledge, strong BI architecture expertise, and the ability to translate business questions into meaningful analytics. This is not a finance-centric role; the focus is on operational excellence, supply chain visibility, and business enablement.
The business teams are relatively new to D365 F&O and Power BI, so the successful candidate will act as a guide, translator, and evangelist, helping users mature their analytical capabilities while setting a scalable BI foundation.
Key Responsibilities
Strategic & Business Partnership
Act as the BI thought leader for the business, bridging the gap between operational teams, IT, and executive leadership
Partner closely with Supply Chain, Manufacturing, Logistics, and Operations leaders to define meaningful KPIs and reporting needs
Support C-suite reporting, turning complex data into clear, executive-level insights and narratives
Help shape the BI roadmap and data strategy, aligned with business priorities and transformation initiatives
Power BI & Analytics Enablement
Lead the design, development, and governance of Power BI solutions across the organization
Establish best practices for data models, semantic layers, dashboards, and self-service BI
Drive adoption by coaching and enabling business users to effectively use Power BI
Translate business questions into intuitive dashboards and actionable insights
D365 F&O & Data Architecture
Design and maintain a robust analytics architecture leveraging Dynamics 365 F&O data
Develop deep understanding of SCM data domains: inventory, procurement, production, warehousing, demand planning, logistics
Ensure data consistency, reliability, and scalability across reporting solutions
Work with IT and partners on data pipelines, integrations, and performance optimization
Leadership & Change Enablement
Serve as a trusted advisor during BI and data-related initiatives
Drive continuous improvement in how data is used to manage operations and performance
Champion a data-driven culture across a business new to modern BI tools
Required Experience & Skills
Core Requirements
8+ years of experience in BI, data architecture, or analytics leadership roles
Deep functional understanding of Supply Chain Management (SCM) in a manufacturing or industrial environment
Strong experience with Power BI (data modeling, DAX, semantic models, governance)
Hands-on experience working with Dynamics 365 Finance & Operations, particularly SCM modules
Proven ability to work directly with senior business stakeholders and executives
Business & Communication Skills
Strong ability to translate business needs into technical BI solutions
Comfortable educating and enabling non-technical users
Executive-level communication and storytelling with data
Pragmatic, business-first mindset (not overly technical or finance-driven)
Nice to Have
Experience with Azure Data Platform (Azure SQL, Synapse, Data Factory, Databricks)
Experience designing enterprise KPI frameworks for operations and supply chain
Background in organizations early in their Power BI or ERP maturity journey
Exposure to data governance, master data, or analytics operating models
$87k-120k yearly est. 2d ago
Staff Data Engineer
Visa 4.5
Data engineer job in Austin, TX
Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid.
Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.
Job Description
Visa Technology & Operations LLC, a Visa Inc. company, needs a Staff Data Engineer (multiple openings) in Austin, TX to:
Design, enhance, and build next generation fraud detection solutions in an agile development environment.
Formulate business problems as technical data problems while ensuring key business drivers are captured in collaboration with product stakeholders.
Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards. Responsibilities span all phases of solution development.
Collaborate with project team members (Product Managers, Architects, Analysts, Developers, Project Managers, etc.) to ensure development and implementation of new data driven business solutions.
Deliver all code commitments and ensure a complete end-to-end solution that meets and exceeds business expectations.
Assist in scoping and designing analytic data assets, implementing modelled attributes, and contributing to brainstorming sessions.
Build and maintain a robust data engineering process to develop and implement self-serve data and tools for Visa's data scientists.
Perform other tasks on data governance, system infrastructure, analytics tool evaluation, and other cross-team functions, as needed.
Execute data engineering projects ranging from small to large, either individually or as part of a project team.
Ensure project delivery within timelines and budget requirements.
Effectively communicate status, issues, and risks in a precise and timely manner.
Position reports to the Austin, Texas office and may allow for partial telecommuting.
Qualifications
Basic Qualifications:
Bachelor's degree in Computer Science, Engineering, Data Analytics or related field, followed by 5 years of progressive, post-baccalaureate experience in the job offered or in a data engineer-related occupation.
Alternatively, a Master's degree in Computer Science, Engineering, Data Analytics or related field and 2 years of experience in the job offered or in a data engineer-related occupation.
Experience must include:
Creating data driven business solutions and solving data problems using the following technologies: Hadoop, Spark, NoSQL and SQL.
Building ETL/ELT data pipelines, data quality checks and data anomaly detection systems.
Building large scale data processing systems of high availability, low latency, & strong data consistency.
Programming using SQL or Python.
Agile development incorporating continuous integration and continuous delivery.
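The experience items above mention building data quality checks and data anomaly detection systems. As a minimal, illustrative sketch (not Visa's actual implementation; the function names, field names, and z-score threshold are hypothetical), such checks often combine rule-based row validation with simple statistical outlier detection:

```python
from statistics import mean, stdev

def quality_checks(rows, required_fields):
    """Split rows into passed/failed: a row fails if any required field is
    missing or its amount is not positive (hypothetical rule set)."""
    passed, failed = [], []
    for row in rows:
        ok = (all(row.get(f) is not None for f in required_fields)
              and row.get("amount", 0) > 0)
        (passed if ok else failed).append(row)
    return passed, failed

def detect_anomalies(values, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > z_threshold]
```

In a production pipeline the same pattern would typically run as a Spark job over partitioned data, with failed rows routed to a quarantine table rather than dropped.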
Additional Information
Worksite: Austin, TX
This is a hybrid position. Hybrid employees can alternate time between both remote and office. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
Travel Requirements: This position does not require travel.
Mental/Physical Requirements: This position will be performed in an office setting. The position will require the incumbent to sit and stand at a desk, communicate in person and by telephone, frequently operate standard office equipment, such as telephones and computers.
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
U.S. APPLICANTS ONLY: The estimated salary range for a new hire into this position is $163,550.00 USD to $210,300.00 USD per year, which may include potential sales incentive payments (if applicable). Salary may vary depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position may be eligible for bonus and equity. Visa has a comprehensive benefits package for which this position may be eligible that includes Medical, Dental, Vision, 401 (k), FSA/HSA, Life Insurance, Paid Time Off, and Wellness Program.
$163.6k-210.3k yearly 5d ago
Lead Data Engineer
The Friedkin Group 4.8
Data engineer job in Houston, TX
Living Our Values All associates are guided by Our Values. Our Values are the unifying foundation of our companies. We strive to ensure that every decision we make and every action we take demonstrates Our Values. We believe that putting Our Values into practice creates lasting benefits for all of our associates, shareholders, and the communities in which we live.
Why Join Us
Career Growth: Advance your career with opportunities for leadership and personal development.
Culture of Excellence: Be part of a supportive team that values your input and encourages innovation.
Competitive Benefits: Enjoy a comprehensive benefits package that looks after both your professional and personal needs.
Total Rewards
Our Total Rewards package underscores our commitment to recognizing your contributions. We offer a competitive and fair compensation structure that includes base pay and performance-based rewards. Compensation is based on skill set, experience, qualifications, and job-related requirements. Our comprehensive benefits package includes medical, dental, and vision insurance, wellness programs, retirement plans, and generous paid leave. Discover more about what we offer by visiting our Benefits page.
A Day In The Life
As a Lead Data Engineer within the Trailblazer initiative, you will play a crucial role in architecting, implementing, and managing robust, scalable data infrastructure. This position demands a blend of systems engineering, data integration, and data analytics skills to enhance TFG's data capabilities, supporting advanced analytics, machine learning projects, and real-time data processing needs. The ideal candidate brings deep expertise in Lakehouse design principles, including layered Medallion Architecture patterns (Bronze, Silver, Gold), to drive scalable and governed data solutions. This is also a highly visible leadership role that will represent the data engineering function and lead the Data Management Community of Practice across TFG.
As a Lead Data Engineer, you will:
Design and implement scalable and reliable data pipelines to ingest, process, and store diverse data at scale, using technologies such as Apache Spark, Hadoop, and Kafka.
Work within cloud environments like AWS or Azure to leverage services including but not limited to EC2, RDS, S3, Lambda, and Azure Data Lake for efficient data handling and processing.
Architect and operationalize data pipelines following Medallion Architecture best practices within a Lakehouse framework, ensuring data quality, lineage, and usability across Bronze, Silver, and Gold layers.
Develop and optimize data models and storage solutions (Databricks, Data Lakehouses) to support operational and analytical applications, ensuring data quality and accessibility.
Utilize ETL tools and frameworks (e.g., Apache Airflow, Fivetran) to automate data workflows, ensuring efficient data integration and timely availability of data for analytics.
Lead the Data Management Community of Practice, serving as the primary facilitator, coordinator, and spokesperson. Drive knowledge sharing, establish best practices, and represent the data engineering discipline across TFG to both technical and business audiences.
Collaborate closely with data scientists, providing the data infrastructure and tools needed for complex analytical models, leveraging Python or R for data processing scripts.
Ensure compliance with data governance and security policies, implementing best practices in data encryption, masking, and access controls within a cloud environment.
Monitor and troubleshoot data pipelines and databases for performance issues, applying tuning techniques to optimize data access and throughput.
Stay abreast of emerging technologies and methodologies in data engineering, advocating for and implementing improvements to the data ecosystem.
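The Medallion Architecture responsibilities above describe layered Bronze, Silver, and Gold transitions. As a toy sketch of the pattern (in a real Lakehouse these would be Spark/Databricks transformations over tables, not in-memory lists; the field names and quality rules here are hypothetical), each layer adds structure and trust to the data:

```python
def to_bronze(raw_records):
    """Bronze: land raw records as-is, tagging each with its source."""
    return [{"payload": r, "source": "erp_export"} for r in raw_records]

def to_silver(bronze):
    """Silver: parse, deduplicate on id, and drop rows failing quality rules."""
    seen, silver = set(), []
    for row in bronze:
        rec = row["payload"]
        qty = rec.get("qty")
        if rec.get("id") is None or rec["id"] in seen or qty is None or qty < 0:
            continue  # quality enforcement at the layer boundary
        seen.add(rec["id"])
        silver.append({"id": rec["id"], "sku": rec.get("sku", "").upper(), "qty": qty})
    return silver

def to_gold(silver):
    """Gold: aggregate to a business-ready KPI table (units on hand per SKU)."""
    gold = {}
    for rec in silver:
        gold[rec["sku"]] = gold.get(rec["sku"], 0) + rec["qty"]
    return gold
```

The key design point is that each transition is a distinct, auditable step: lineage is traceable layer to layer, and quality rules live at the boundaries rather than scattered through consuming reports.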
What We Need From You
Bachelor's Degree in Computer Science, MIS, or other business discipline and 10+ years of experience in data engineering, with a proven track record in designing and operating large-scale data pipelines and architectures Required, or
Master's Degree in Computer Science, MIS, or other business discipline and 5+ years of experience in data engineering, with a proven track record in designing and operating large-scale data pipelines and architectures Required
Demonstrated experience designing and implementing Medallion Architecture in a Databricks Lakehouse environment, including layer transitions, data quality enforcement, and optimization strategies. Required
Expertise in developing ETL/ELT workflows Required
Comprehensive knowledge of platforms and services like Databricks, Dataiku, and AWS native data offerings Required
Solid experience with big data technologies (Apache Spark, Hadoop, Kafka) and cloud services (AWS, Azure) related to data processing and storage Required
Strong experience in AWS cloud services, with hands-on experience in integrating cloud storage and compute services with Databricks Required
Proficient in SQL and programming languages relevant to data engineering (Python, Java, Scala) Required
Hands on RDBMS experience (data modeling, analysis, programming, stored procedures) Required
Familiarity with machine learning model deployment and management practices Preferred
Strong executive presence and communication skills, with a proven ability to lead communities of practice, deliver presentations to senior leadership, and build alignment across technical and business stakeholders. Required
AWS Certified Solution Architect Preferred
Databricks Certified Associate Developer for Apache Spark Preferred
DAMA CDMP Preferred
or other relevant certifications. Preferred
Physical and Environmental Requirements
The physical requirements described here are representative of those that must be met by an associate to successfully perform the essential functions of the job. While performing the duties of the job, the associate is required on a daily basis to analyze and interpret data, communicate, and remain in a stationary position for a significant amount of the work day and frequently access, input, and retrieve information from the computer and other office productivity devices. The associate is regularly required to move about the office and around the corporate campus. The associate must frequently move up to 10 pounds and occasionally move up to 25 pounds.
Travel Requirements
20% The associate is occasionally required to travel to other sites, including out-of-state, where applicable, for business.
Join Us
The Friedkin Group and its affiliates are committed to ensuring equal employment opportunities, including providing reasonable accommodations to individuals with disabilities. If you have a disability and would like to request an accommodation, please contact us at . We celebrate diversity and are committed to creating an inclusive environment for all associates.
We are seeking candidates legally authorized to work in the United States, without Sponsorship.
#LI-BM1
$82k-114k yearly est. 5d ago
Data Engineer I
Agtexas Farm Credit Services 3.6
Data engineer job in Lubbock, TX
COMPANY PROFILE:
AgTexas Farm Credit Services serves and supports approximately 2,600 member/borrowers in areas of lending, insurance sales, appraisal, and/or leasing. Eleven office locations can be found throughout the Association's 43-county trade territory, and the association has an average loan volume of approximately $2.85 billion, with total assets equaling $3.2 billion. The lending portfolio consists of cotton, livestock, dairy, feed grains, real estate, and ag-related business loans. Additionally, the association territory provides diversity in production and mortgage loans as well as commodities financed. Without strong financial backing, farmers and ranchers will not survive, and people will not have food to eat or clothes to wear. AgTexas provides reliable credit and crop insurance to our member-owners so they can feed and clothe the world.
POSITION:
The Data Engineer I position has a negotiable location and salary depending upon experience. The role supports the design, development, and maintenance of data pipelines and data infrastructure to ensure reliable data access and analytics across the Association, and contributes to workflow automation by collecting, cleaning, and integrating data from multiple sources while building and supporting automation platforms.
*AgTexas, at its sole discretion, may offer this position with a different title based upon the qualifications of the candidate.
MINIMUM EDUCATION AND EXPERIENCE:
Bachelor's degree in computer science, information systems, data science, a related field, or equivalent practical experience. Possesses a strong proficiency in SQL and a solid understanding of relational database concepts, enabling effective data querying and management. Familiarity with one or more programming languages such as Python, PowerShell (PS), or Java is essential for scripting and automation tasks. Exposure to cloud data platforms, including Azure or AWS, is preferred, as is a working knowledge of ETL processes and data pipeline development. Experience or familiarity with data modeling, data warehousing concepts, or business intelligence (BI) tools is preferred.
KEY REQUIREMENTS:
The ideal candidate has a solid technical foundation, strong problem-solving skills, and the ability to contribute beyond traditional data engineering duties by supporting both data-driven and process automation initiatives. Strong problem-solving and analytical skills, with the ability to break down complex issues and develop effective solutions. Ability to convey technical concepts clearly to non-technical stakeholders. Proactive in the application of new tools and technologies. Strong interpersonal and excellent oral and written communication skills are required.
KEY RESPONSIBILITIES:
Design, build, and maintain data pipelines and integrations to ensure reliable access to data across systems.
Support database design, modeling, and optimization efforts.
Support and enhance workflow automation solutions using platforms such as Nintex and Microsoft Power Platform (Power Automate, Power Apps, Power BI).
Assist in managing cloud-based data platforms (e.g., Azure, AWS, etc.).
Collaborate with analysts and business teams to ensure data availability for reporting and analytics.
Implement data quality checks, validation processes, and documentation for all data flows.
Develop and optimize SQL queries, scripts, API management, and automation processes.
Monitor, troubleshoot, and resolve issues related to data pipelines, workflows, and system integrations.
Contribute to continuous improvement initiatives that leverage both data engineering and workflow automation.
WORKING RELATIONSHIPS:
Frequent interaction with Association departmental staff and management. Occasional interaction with Association senior management, CEO, and/or board of directors. Occasional interaction with Farm Credit Bank of Texas staff.
EOE/AA/M/F/D/V
AgTexas FCS is an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, national origin, protected veteran status, sexual orientation, gender identity or genetic information.
Persons with disabilities who require an accommodation to complete the application process should call our Lubbock office at and ask to speak to one of our HR representatives to request accommodation in the application process.
$81k-105k yearly est. 3d ago
Lead Data Engineer
Capital One 4.7
Data engineer job in Plano, TX
Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Lead Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.
What You'll Do:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies
Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems
Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake
Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community
Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
Basic Qualifications:
Bachelor's Degree
At least 4 years of experience in application development (Internship experience does not apply)
At least 2 years of experience in big data technologies
At least 1 year experience with cloud computing (AWS, Microsoft Azure, Google Cloud)
Preferred Qualifications:
7+ years of experience in application development including Python, SQL, Scala, or Java
4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
4+ years experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
4+ years of experience working on real-time data and streaming applications
4+ years of experience with NoSQL implementation (Mongo, Cassandra)
4+ years of data warehousing experience (Redshift or Snowflake)
4+ years of experience with UNIX/Linux including basic commands and shell scripting
2+ years of experience with Agile engineering practices
At this time, Capital One will not sponsor a new applicant for employment authorization, or offer any immigration related support for this position (i.e. H1B, F-1 OPT, F-1 STEM OPT, F-1 CPT, J-1, TN, or another type of work authorization).
The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked.
McLean, VA: $197,300 - $225,100 for Lead Data Engineer
Plano, TX: $179,400 - $204,700 for Lead Data Engineer
Richmond, VA: $179,400 - $204,700 for Lead Data Engineer
Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter.
This role is also eligible to earn performance based incentive compensation, which may include cash bonus(es) and/or long term incentives (LTI). Incentives could be discretionary or non discretionary depending on the plan.
Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being. Learn more at the Capital One Careers website . Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level.
This role is expected to accept applications for a minimum of 5 business days. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to
Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
$77k-99k yearly est. 54m ago
Lead Data Engineer
Coca Cola Southwest Beverages 4.4
Data engineer job in Dallas, TX
General Purpose
This role oversees and manages the work of Data Engineers who design and develop ETL pipelines and processes in our Cloud Infrastructure. They are responsible for the overall data integrity and quality, pipeline performance, and cost optimization within our Cloud (Azure) infrastructure.
Duties and Responsibilities
Manages the day-to-day tasks and development work of the Data Engineering team to ensure alignment with objectives and deliverables
Conducts continuous evaluation and improvement of the ETL pipelines created by the Data Engineers to ensure cost effectiveness and performance
Designs and implements automatic data quality reporting, communicating directly with different stakeholders whenever there are deviations
Creates and implements technical roadmaps to ensure we have the most up-to-date Cloud infrastructure
Ensures technical alignment with the Data Engineering team from HQ.
Qualifications
Bachelor's degree in computer science, software engineering, information technology, or a related field.
Advanced degree is a plus.
3+ years of experience in data engineering, including hands-on experience in building and maintaining data pipelines and data infrastructure.
3+ years of proven experience leading and managing a team of data engineers, including setting objectives, providing guidance, and assessing performance
A strong foundation in data engineering principles, including data integration, ETL (Extract, Transform, Load) processes, data warehousing, and data modeling.
Proficiency in programming languages commonly used in data engineering, such as Python, Java, Scala, or SQL.
Proficiency with big data technologies and platforms, such as Hadoop, Spark, Apache Kafka, and distributed data processing frameworks.
Experience with various database systems, including relational databases (e.g., SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Proficiency with cloud platforms like Azure, AWS, etc. for data storage and processing.
Ability to lead and manage a team of data engineers, including setting objectives, providing guidance, and assessing performance.
30% travel projected
Applicants with disabilities may be entitled to reasonable accommodation under the Americans with Disabilities Act and certain Texas or local laws. A reasonable accommodation is a change in the way things are normally done which will ensure an equal employment opportunity without imposing undue hardship on Coca-Cola Southwest Beverages. Please inform us at if you need assistance completing this application or to otherwise participate in the application process.
Know Your Rights dol.gov
Coca-Cola Southwest Beverages LLC is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity and/or expression, status as a veteran, and basis of disability or any other federal, state or local protected class.
$88k-121k yearly est. 2d ago
SR Data Engineer
Alliance Technical Group 4.8
Data engineer job in Houston, TX
Summary/Objective
Alliance Technical Group is seeking an experienced Senior Data Engineer to design, build, and optimize the databases and data systems that power our business intelligence and analytics platforms. In this role, you will lead the creation of scalable pipelines, data models, and cloud-based architecture that ensures reliable, high-quality data across the organization. You will architect and implement new database solutions, drive migration and optimization efforts in PostgreSQL and Snowflake, and advance data governance initiatives. Working in an agile, cross-functional environment, you will translate business needs into scalable solutions while ensuring security, integrity, and performance. This role is ideal for someone who thrives on building data platforms from the ground up and shaping enterprise data strategy.
This position is remote but may follow a hybrid schedule for candidates located within 60 miles of an Alliance office.
Essential Functions
Architect and implement new databases and data platforms to meet business and technical requirements
Build and maintain scalable ELT/ETL pipelines and data models for analytics and reporting
Manage and optimize database environments, including provisioning, performance tuning, and access controls
Implement and manage automation and workflow orchestrations for reliable, repeatable deployments
Ensure data quality, consistency, and compliance through robust validation and monitoring practices
Collaborate with cross-functional teams to define technical requirements, estimate development efforts, and deliver business-aligned solutions
Troubleshoot complex issues, recommend improvements, and maintain clear documentation of data systems, processes, and solutions proposed/implemented
Mentor junior data engineers, providing technical guidance, code reviews, and best practices
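The essential functions above include managing automation and workflow orchestrations for reliable, repeatable deployments. At its core, orchestration means running tasks in dependency order; as a minimal sketch (real deployments would use a tool like Airflow or Dagster with retries, scheduling, and cycle detection, none of which this toy runner has; the task names are hypothetical):

```python
def run_pipeline(tasks, deps):
    """Run callables in dependency order.
    tasks: dict mapping task name -> zero-arg callable.
    deps:  dict mapping task name -> iterable of prerequisite task names.
    Returns the order tasks were executed in. Assumes deps form a DAG."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for dep in deps.get(name, ()):  # run prerequisites first
            run(dep)
        tasks[name]()   # execute the task itself
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order
```

For example, declaring `deps = {"transform": {"extract"}, "load": {"transform"}}` guarantees extract runs before transform and transform before load, regardless of how the tasks dict is ordered.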
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience)
10+ years of hands-on experience with relational databases, including designing and implementing new database systems
Knowledge, Skills, & Abilities
Proven experience designing and building new databases and large-scale data systems, particularly in PostgreSQL and Snowflake; hands-on experience with dbt
Demonstrated ability to develop and optimize ELT/ETL pipelines for complex data sets
Experience with cloud architecture, workflow orchestration, and Git-based version control
Knowledge of data governance, data quality, and validation practices
Strong troubleshooting and root-cause analysis skills
Excellent communication skills and experience working in agile, cross-functional teams
Working knowledge of Python preferred, familiarity with data visualization principles a plus
Work Environment
While performing the duties of this job, the employee regularly works in an office setting with constant sitting and occasional standing.
Physical Demands
Prolonged periods of sitting or standing at a desk while using a computer.
Travel
Not Applicable
Other Duties
Please note this job description is intended to describe the general nature and level of work performed by employees assigned to this position. It is not designed to contain or be interpreted as a comprehensive list of all duties, responsibilities, and qualifications. Additional job-related duties may be assigned. Alliance reserves the right to amend and change responsibilities to meet business and organizational needs as necessary with or without notice.
Employee Benefits: Key Benefits Include:
- Medical, Dental, and Vision Insurance
- Flexible Spending Accounts
- 401(K) Plan with Competitive Match
- Continuing Education and Tuition Assistance
- Employer-Sponsored Disability Benefits
- Life Insurance
- Employee Assistance Program (EAP)
- Paid Time Off (PTO), Paid Holidays, & Bonus Floating Holiday (if hired before July 1st)
- Profit Sharing or Individual Bonus Programs
- Referral Program
- Per Diem & Paid Travel
- Employee Discount Hub
Salary based upon experience.
EEO Commitment: We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, national origin, ethnicity, sex, pregnancy, sexual orientation, gender identity/expression, including transgender identity, religion, disability, age, genetics, active military or veteran status and any other characteristics protected under applicable federal or state law.
$82k-111k yearly est. 6d ago
Senior Data Solution Architect
GBIT (Global Bridge Infotech Inc.)
Data engineer job in Richardson, TX
Role: Senior Data Solution Architect-FTE
Client: CBRE - Full time Opportunity
Job Details
As a Senior Data Solutions Architect, you'll own the design and implementation of modern data platforms that tackle sophisticated business problems. This role goes beyond traditional data engineering; it's about enabling intelligent automation, semantic understanding, and autonomous decision-making through technologies like Ontology, Knowledge Graphs, NLP, Agentic AI, and MCP.
You'll be the bridge between business strategy and technical execution, shaping scalable, secure, and intelligent data ecosystems that power sophisticated analytics and AI-driven transformation.
Responsibilities:
Build, develop, and implement modern AI-ready data architectures, integrations, and application solutions that span multiple platforms and technologies.
Compose and operationalize ontologies and knowledge graphs to enable semantic data integration and contextual analytics.
Integrate Agentic AI frameworks to automate multi-step workflows and decision-making.
Own the development of scalable ETL/ELT pipelines, data lakes, and data warehouses.
Develop and support integrations between enterprise systems, including data lakes, warehouses, and third-party APIs, ensuring performance, reliability, and data consistency.
Apply NLP techniques to extract insights from structured & unstructured data and enable conversational analytics.
Collaborate with stakeholders across departments to translate business needs into technical requirements and scalable, secure solutions.
Collaborate with other developers, data analysts/engineers, and vendors to deliver cross-functional solutions.
Define and maintain artifacts, including data flow diagrams, integration patterns, and metadata repositories.
Ensure data governance, security, and compliance across all solutions.
Stay current on new and emerging technologies.
Expectations:
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
10+ years of experience, including at least 3 years in data architecture and data products
Proven expertise in building Data platforms & Data products
Solid understanding of NLP, LLMs, and advanced analytics workflows
Hands-on experience with Agentic AI systems and frameworks
Substantial experience with data modeling, integration patterns & semantic technologies. Exposure to Ontology & Knowledge Graph is a plus.
An understanding of Master Data Management and Data Governance frameworks
Experience with BI/reporting tools
Deep familiarity with modern data platforms (Snowflake, Databricks, Azure Synapse).
Experience with data modeling, metadata management, and orchestration tools
Excellent communication and stakeholder leadership skills.
Experience working in agile environments and leading cross-functional teams.
$93k-125k yearly est. 3d ago
Data Architect
Novocure Inc. 4.6
Data engineer job in Dallas, TX
We are seeking an experienced and innovative Data Architect to lead the design, development, and optimization of our enterprise data architecture. This individual will play a critical role in aligning data strategy with business objectives, ensuring data integrity, and driving value from data across multiple platforms. The ideal candidate will have deep expertise in data architecture best practices and technologies, particularly across SAP S/4 HANA, Veeva CRM, Veeva Vault, SaaS platforms, Operational Data Stores (ODS), and Master Data Management (MDM) platforms.
This is a full-time position reporting to the Director, Enterprise Architecture.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Design, develop, and maintain scalable and secure enterprise data architecture solutions across SAP S/4 HANA, Veeva CRM, and Veeva Vault environments.
Serve as a subject matter expert for Operational Data Stores and Master Data Management architecture, ensuring clean, consistent, and governed data across the enterprise.
Collaborate with cross-functional teams to identify data needs, establish data governance frameworks, and define data integration strategies.
Develop data models, data flows, and system integration patterns that support enterprise analytics and reporting needs.
Evaluate and recommend new tools, platforms, and methodologies for improving data management capabilities.
Ensure architectural alignment with data privacy, regulatory, and security standards.
Provide leadership and mentoring to data engineering and analytics teams on best practices in data modeling, metadata management, and data lifecycle management.
Contribute to data governance initiatives by enforcing standards, policies, and procedures for enterprise data.
QUALIFICATIONS/KNOWLEDGE:
Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
Minimum of 8 years of experience in data architecture, data integration, or enterprise data management roles.
Proven experience in designing and implementing data solutions on SAP S/4 HANA, including integration with other enterprise systems.
Strong hands-on experience with SaaS platforms, including data extraction, modeling, and harmonization.
Deep understanding of Operational Data Stores and MDM design patterns, implementation, and governance practices.
Proficiency in data modeling tools (e.g., Erwin, SAP PowerDesigner), ETL tools (e.g., Business Objects Data Services, SAP Data Services), and integration platforms (e.g., MuleSoft).
Familiarity with cloud data architecture (e.g., AWS, Azure, GCP) and hybrid data environments.
Excellent communication and stakeholder management skills.
OTHER:
Experience with pharmaceutical, life sciences, or regulated industry environments.
Knowledge of data privacy regulations such as GDPR, HIPAA, and data compliance frameworks
Ability to travel domestically and internationally as needed for high priority projects
Novocure is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state, or local law. We actively seek qualified candidates who are protected veteran and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Novocure is committed to providing an interview process that is inclusive of our applicant's needs. If you are an individual with a disability and would like to request an accommodation, please email
ABOUT NOVOCURE:
Our vision
Patient-forward: aspiring to make a difference in cancer.
Our patient-forward mission
Together with our patients, we strive to extend survival in some of the most aggressive forms of cancer by developing and commercializing our innovative therapy.
Our patient-forward values
innovation
focus
drive
courage
trust
empathy
#LI-RJ1
$89k-121k yearly est. 2d ago
Data SDET
Coforge
Data engineer job in Dallas, TX
Data SDET (ETL Tester with Automation)
Key skills: ETL, Automation, Kafka, AWS & Python
Mode: Hybrid - 3 days a Week Onsite
Shift - General
We are seeking a highly skilled Data SDET with expertise in ETL testing, test automation, and strong knowledge of Kafka, AWS, and Python. The ideal candidate will be responsible for designing, developing, and executing automated tests for data pipelines, ensuring data integrity, and validating complex data transformations across distributed systems.
Key Responsibilities:
Develop and maintain automated test frameworks for ETL processes and data pipelines.
Perform ETL testing to validate data extraction, transformation, and loading.
Create Python-based automation scripts for data validation.
Test real-time data streaming using Kafka.
Validate workflows in AWS environments (S3, Glue, Redshift, Lambda).
Collaborate with engineering teams to ensure data quality and reliability.
Integrate testing into CI/CD pipelines for continuous delivery.
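As a hedged illustration of the Python-based data-validation work described above (the `transform` function, field names, and data are hypothetical, not taken from any employer's pipeline), a minimal PyTest-style check for an ETL step might look like:

```python
# Minimal sketch of PyTest-style ETL validation. Real pipelines would pull
# source/target rows from systems like S3, Redshift, or a Kafka topic; here
# in-memory lists stand in so the example is self-contained.

def transform(rows):
    """Hypothetical transformation: uppercase names, drop rows with null IDs."""
    return [
        {"id": r["id"], "name": r["name"].upper()}
        for r in rows
        if r["id"] is not None
    ]

def test_row_count_matches_non_null_source():
    # Reconciliation check: target row count equals non-null source rows.
    source = [{"id": 1, "name": "a"}, {"id": None, "name": "b"}]
    target = transform(source)
    assert len(target) == sum(1 for r in source if r["id"] is not None)

def test_transformation_rules_applied():
    # Rule check: the documented transformation was actually applied.
    target = transform([{"id": 2, "name": "acme"}])
    assert target == [{"id": 2, "name": "ACME"}]
```

In practice such tests are parameterized over tables or topics and wired into a CI/CD stage (e.g., Jenkins) so every pipeline change re-validates the data contracts.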
Required Skills:
3+ years in ETL testing and data validation.
Strong Python programming skills.
Hands-on experience with Kafka and AWS services.
Knowledge of SQL and relational databases.
Familiarity with test automation frameworks (PyTest, Robot Framework).
Experience with CI/CD tools (Jenkins, Git).
$69k-94k yearly est. 1d ago
Software Engineer, Entry Level (New Grad)
Emonics LLC
Data engineer job in Houston, TX
About the role
We are hiring an Entry Level Software Engineer to join a collaborative engineering team building modern web and backend systems. This is a great opportunity for recent graduates to work on real production features, learn best practices, and grow with mentorship.
What you will do
• Build and enhance backend services and APIs
• Develop UI features and improve user experience (depending on team)
• Write clean, testable code and participate in code reviews
• Troubleshoot issues, fix bugs, and improve system reliability
• Collaborate with product, QA, and other engineers in agile sprints
• Document technical work and contribute to team knowledge bases
What we are looking for
• Bachelor's or Master's degree in Computer Science, Software Engineering, or related field (or equivalent experience)
• Strong fundamentals in data structures, algorithms, and OOP
• Experience with at least one programming language (Java, Python, JavaScript, C#, etc.)
• Familiarity with Git and basic CI/CD concepts
• Comfort working with SQL or basic database concepts
• Strong communication and willingness to learn
Nice to have
• Internship, capstone, or personal projects (GitHub preferred)
• Exposure to cloud platforms (AWS, Azure, GCP)
• Familiarity with Docker, REST, microservices, or React
How much does a data engineer earn in Lubbock, TX?
The average data engineer in Lubbock, TX earns between $67,000 and $121,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Lubbock, TX
$90,000
What are the biggest employers of Data Engineers in Lubbock, TX?
The biggest employers of Data Engineers in Lubbock, TX are: