Data Scientist
Lewistown, PA jobs
Founded over 35 years ago, First Quality is a family-owned company that has grown from a small business in McElhattan, Pennsylvania into a group of companies, employing over 5,000 team members, while maintaining our family values and entrepreneurial spirit. With corporate offices in New York and Pennsylvania and 8 manufacturing campuses across the U.S. and Canada, the companies within the First Quality group produce high-quality personal care and household products for large retailers and healthcare organizations. Our personal care and household product portfolio includes baby diapers, wipes, feminine pads, paper towels, bath tissue, adult incontinence products, laundry detergents, fabric finishers, and dishwash solutions. In addition, we manufacture certain raw materials and components used in the manufacturing of these products, including flexible print and packaging solutions.
Guided by our values of humility, unity, and integrity, we leverage advanced technology and innovation to drive growth and create new opportunities. At First Quality, you'll find a collaborative environment focused on continuous learning, professional development, and our mission to Make Things Better.
We are seeking a Data Scientist for our First Quality facilities located in McElhattan, PA; Lewistown, PA; and Macon, GA.
**Must have manufacturing experience with consumer goods.**
The role will provide meaningful insight on how to improve our current business operations. This position will work closely with domain experts and SMEs to understand the business problem or opportunity and assess the potential of machine learning to enable accelerated performance improvements.
Principal Accountabilities/Responsibilities
Design, build, tune, and deploy divisional AI/ML tools that meet the agreed upon functional and non-functional requirements within the framework established by the Enterprise IT and IS departments.
Perform large scale experimentation to identify hidden relationships between different data sets and engineer new features
Communicate model performance, results, and tradeoffs to stakeholders
Determine requirements that will be used to train and evolve deep learning models and algorithms
Visualize information and develop engaging dashboards on the results of data analysis.
Build reports and advanced dashboards to tell stories with the data.
Lead, develop, and deliver divisional strategies that demonstrate the what, why, and how of delivering AI/ML business outcomes
Build and deploy a divisional AI strategy and roadmaps that enable long-term success for the organization and align with the Enterprise AI strategy.
Proactively mine data to identify trends and patterns and generate insights for business units and management.
Mentor other stakeholders to grow in their expertise, particularly in AI/ML, and take an active leadership role in divisional executive forums
Work collaboratively with the business to maximize the probability of success of AI projects and initiatives.
Identify technical areas for improvement and present detailed business cases for improvements or new areas of opportunities.
Qualifications/Education/Experience Requirements
PhD or master's degree in Statistics, Mathematics, Computer Science or other relevant discipline.
5+ years of experience using large scale data to solve problems and answer questions.
Prior experience in the Manufacturing Industry.
Skills/Competencies Requirements
Experience in building and deploying predictive models and scalable data pipelines
Demonstrable experience with common data science toolkits, such as Python, PySpark, R, Weka, NumPy, Pandas, scikit-learn, SpaCy/Gensim/NLTK etc.
Knowledge of data warehousing concepts like ETL, dimensional modeling, and semantic/reporting layer design.
Knowledge of emerging technologies such as columnar and NoSQL databases, predictive analytics, and unstructured data.
Fluency in data science, analytics tools, and a selection of machine learning methods: clustering, regression, decision trees, time series analysis, and natural language processing (see the brief sketch after this list).
Strong problem solving and decision-making skills
Ability to explain deep technical information to non-technical parties
Demonstrated growth mindset, enthusiastic about learning new technologies quickly and applying the gained knowledge to address business problems.
Strong understanding of data governance/management concepts and practices.
Strong background in systems development, including an understanding of project management methodologies and the development lifecycle.
Proven history managing stakeholder relationships.
Business case development.
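As a loose illustration of the toolkit named above, here is a minimal scikit-learn sketch of the kind of regression model this role might build against manufacturing data; the CSV path, sensor column names, and the scrap-rate target are hypothetical placeholders, not details from this posting.

    # Hedged sketch: predict a line-quality metric from process sensors.
    # The file path and all column names below are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("line_sensors.csv")  # hypothetical extract of historian data

    features = ["web_speed", "oven_temp", "humidity", "glue_flow"]  # hypothetical tags
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["scrap_rate"], test_size=0.2, random_state=42
    )

    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.4f}")

A tree ensemble is only one reasonable starting point; the posting's mention of time series analysis and NLP implies other model families would sit alongside it.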
What We Offer You
We believe that by continuously improving the quality of our benefits, we can help to raise the quality of life for our team members and their families. At First Quality you will receive:
Competitive base salary and bonus opportunities
Paid time off (three-week minimum)
Medical, dental and vision starting day one
401(k) with employer match
Paid parental leave
Child and family care assistance (dependent care FSA with employer match up to $2500)
Bundle of joy benefit (year's worth of free diapers to all team members with a new baby)
Tuition assistance
Wellness program with savings of up to $4,000 per year on insurance premiums
...and more!
First Quality is committed to protecting information under the care of First Quality Enterprises commensurate with leading industry standards and applicable regulations. As such, First Quality provides at least annual training regarding data privacy and security to employees who, as a result of their role specifications, may come into contact with sensitive data.
First Quality is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, sexual orientation, gender identification, or protected Veteran status.
For immediate consideration, please go to the Careers section at ******************** to complete our online application.
Sr. Developer eCommerce Systems
Anaheim, CA jobs
Join the Pacsun Community
Co-created in Los Angeles, Pacsun inspires the next generation of youth, building community at the intersection of fashion, music, art and sport. Pacsun is a leading lifestyle brand offering an exclusive collection of the most relevant brands and styles such as adidas, Brandy Melville, Essentials Fear of God, our own brands, and many more.
Our Pacsun community believes in and understands the importance of using our voice, platform, and resources to inspire and bring about positive development. Through our PacCares program, we are committed to our responsibility in using our platform to drive change and take action on the issues important to our community. Join the Pacsun Community.
Learn more here: LinkedIn - Our Community
About the Job:
Pacsun's IT eCommerce team uses AI and innovative technologies to enhance customer experience and improve operational efficiency. As a key member of the team, the Senior eCommerce Developer contributes to the architecture, development, and optimization of the company's digital commerce experiences.
This role is responsible for both back‑end and front-end development on Salesforce Commerce Cloud (SFCC), ensuring high‑performance, secure and accessible storefronts, with robust system integration in the eCommerce ecosystem. The Senior eCommerce Developer will lead end‑to‑end delivery of new features, mentor junior developers and the offshore team, and collaborate closely with UX, product, QA and business teams to create compelling online experiences that drive revenue and customer loyalty.
This role will work on the full stack of Pacsun's Salesforce Commerce Cloud, mobile app, AI initiatives and system integrations, supporting Commerce, Loyalty, CRM, OMS, and other eCommerce platforms.
A day in the life, what you'll be doing:
Back‑End Development & Integration
Design, build and maintain SFCC server‑side components, including controllers, pipelines, cartridges and custom business logic.
Develop and manage robust APIs that connect SFCC with tax engines, payment processors, fraud management services and the order management system (a hedged OCAPI sketch follows this list).
Ensure reliable data synchronization between SFCC and external platforms such as CRM, Loyalty, OMS, ERP and analytics systems.
Optimize database models, caching strategies and performance tuning to support high transaction volumes and peak traffic periods.
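As a rough illustration of the integration work above, the sketch below calls the SFCC OCAPI Shop API from Python. The instance host, site ID, API version, and client ID are placeholders; real values come from Business Manager configuration, and production code would add proper authentication and error handling.

    # Hedged sketch of an OCAPI Shop API product lookup.
    import requests

    HOST = "example.demandware.net"  # placeholder instance host
    SITE = "RefArch"                 # placeholder site ID
    CLIENT_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder OCAPI client ID

    def get_product(product_id: str) -> dict:
        url = f"https://{HOST}/s/{SITE}/dw/shop/v23_2/products/{product_id}"
        resp = requests.get(
            url, params={"client_id": CLIENT_ID, "expand": "prices,availability"}
        )
        resp.raise_for_status()
        return resp.json()

    product = get_product("12345")  # placeholder SKU
    print(product.get("name"), product.get("price"))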
Checkout & Transaction Optimization
Own the end‑to‑end checkout experience, ensuring seamless, secure and performant workflows from cart to order confirmation.
Integrate payment gateways and fraud protections to deliver accurate pricing and effortless transactions.
Collaborate with UX and product teams to identify friction points in the checkout process and implement improvements that boost conversion and customer satisfaction.
Tax, Shipping & OMS Integration
Implement and maintain integrations with third‑party tax services to handle complex jurisdictional tax rules.
Connect SFCC to shipping providers and fulfillment platforms to provide real‑time shipping options and tracking.
Build and support integrations with the order management system to ensure accurate order routing, inventory updates and status synchronization.
AI & Innovation Support
Partner with data science and innovation teams to embed AI‑driven personalization, recommendation and search solutions into the platform.
Develop integration points for machine‑learning models and real‑time personalization engines, ensuring data security and compliance.
Prototype and implement new technologies that enhance the customer experience and streamline operations.
Technical Leadership & Collaboration
Lead code reviews, define backend architecture standards and mentor less experienced developers on integration patterns and best practices.
Participate in IT management and technical teams to develop and deploy processes to ensure rapid, reliable releases.
Work closely with product, UX, QA and DevOps teams to define requirements, plan sprints and deliver high‑quality software on schedule.
What it takes to Join:
8+ years of experience in web development and at least 5 years focused on Salesforce Commerce Cloud and SFRA.
Deep knowledge of modern front‑end technologies (HTML5, CSS3/SCSS, JavaScript, React or similar frameworks) and back‑end development (Node.js, Java or equivalent).
Hands‑on experience with SFCC OCAPI/SCAPI, cartridge development, API integrations and Business Manager configurations.
Proven track record integrating third‑party services (payments, tax, shipping, CRM, loyalty, analytics) and implementing secure, scalable solutions.
Familiarity with Agile methodologies, version control (Git) and CI/CD pipelines.
Strong understanding of web performance optimization, SEO and accessibility standards.
Ability to lead discussions, mentor teammates and collaborate with technical teams.
Bachelor's degree in Computer Science, Information Systems or related field; Salesforce B2C Commerce Developer certification is preferred.
Salesforce Commerce Cloud SFRA certified developer is preferred.
Proven ability to excel in fast-growing, dynamic business environments with competing priorities, with a positive, solution-oriented mindset.
Excellent analytical and problem-solving skills.
Salary Range: $149,000 - $159,000
Pac Perks:
Dog friendly office environment
On-site Cafe
On-site Gym
$1,000 referral incentive program
Generous associate discount of 30-50% off merchandise online and in-stores
Competitive long-term and short-term incentive program
Immediate 100% vested 401K contributions and employer match
Calm Premium access for all employees
Employee perks throughout the year
Physical Requirements:
The physical demands described here are representative of those that are required by an associate to successfully perform the essential functions of this job.
While performing the duties of this job, the associate is regularly required to talk or hear. The associate is frequently required to sit; stand; walk; use hands to finger, handle or feel; as well as reach with hands and arms.
Specific vision abilities required by this job include close vision, distance vision, depth perception and ability to adjust focus.
Ability to work in open environment with fluctuating temperatures and standard lighting.
Ability to work on computer and mobile phone for multiple hours; with frequent interruptions.
Required to travel in elevator or stairwells to attend meetings and engage with associates on multiple floors throughout building.
Hotel, Airplane, and Car Travel may be required.
Position Type/Expected Hours of Work:
This is a full-time position. As a National Retailer, occasional evening and/or weekend work may be required during periods of high volume. This role operates in a professional office environment and routinely uses standard office equipment.
Other Considerations:
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the associate for this job. Duties, responsibilities and activities may change at any time with or without notice. Reasonable accommodations may be made to qualified individuals with disabilities to enable them to perform the essential functions of the role.
Equal Opportunity Employer
This employer is required to notify all applicants of their rights pursuant to federal employment laws.
For further information, please review the Know Your Rights notice from the Department of Labor.
Senior Full Stack Developer
Austin, TX jobs
Clayton Services is searching for a Senior Full Stack Developer to join a thriving company in Austin.
Job Type: Direct Hire
Pay Rate: $130,000-$150,000/year
Benefits: Medical, dental, vision, 401K, PTO, and more.
Senior Full Stack Developer Responsibilities:
Design, develop, and maintain scalable applications focusing on React Native for front-end development, and back-end technologies such as AWS and MySQL
Contribute to the codebase by delivering robust, efficient, and future-proof solutions
Work with MySQL to ensure optimal reliability and performance
Lead the integration of AI tools and techniques to streamline coding, automation, and support tasks
Ensure the organization stays at the forefront of technological advancement
Collaborate with developers, designers, and the leadership team
Continuously uphold high standards for code quality, performance, and maintainability
Other duties as assigned
Senior Full Stack Developer Skills and Abilities:
Ability to work in a fast-paced work environment
Excellent communication skills
Excellent organizational skills
Excellent time and project management skills
Senior Full Stack Developer Education and Experience:
A minimum of a bachelor's degree in computer science, engineering, or a related field is highly preferred
A minimum of five years of full stack development experience
Previous experience delivering production-ready applications
Knowledge and experience working with React Native, AWS, MySQL, Oracle databases, Redux, JavaScript, HTML5, C# .Net Core, LINQ, Entity Framework, and REST Web API
Senior Full Stack Developer - Immediate need. Apply today!
Sr. Software Engineer (NO H1B OR C2C) - Major Entertainment Company
Los Angeles, CA jobs
Senior Software Engineer - Ad Platform Machine Learning
We're looking for a Senior Software Engineer to join our Ad Platform Decisioning & Machine Learning Platform team. Our mission is to power the Company's advertising ecosystem with advanced machine learning, AI-driven decisioning, and high-performance backend systems. We build end-to-end solutions that span machine learning, large-scale data processing, experimentation platforms, and microservices, all to improve ad relevance, performance, and efficiency.
If you're passionate about ML technologies, backend engineering, and solving complex problems in a fast-moving environment, this is an exciting opportunity to make a direct impact on next-generation ad decisioning systems.
What You'll Do
Build next-generation experimentation platforms for ad decisioning and large-scale A/B testing (a minimal bucketing sketch follows this list)
Develop simulation platforms that apply state-of-the-art ML and optimization techniques to improve ad performance
Design and implement scalable approaches for large-scale data analysis
Work closely with researchers to productize cutting-edge ML innovations
Architect distributed systems with a focus on performance, scalability, and flexibility
Champion engineering best practices including CI/CD, design patterns, automated testing, and strong code quality
Contribute to all phases of the software lifecycle: design, experimentation, implementation, and testing
Partner with product managers, program managers, SDETs, and researchers in a collaborative and innovative environment
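To make the experimentation work concrete, here is a minimal sketch of deterministic variant bucketing, a common building block of A/B testing platforms like the one described; the experiment name and 50/50 split are illustrative assumptions, not details of this team's system.

    # Hedged sketch: stable assignment of users to experiment arms.
    import hashlib

    def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
        """Hash (experiment, user) to a stable bucket in [0, 1)."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 2**32
        return "treatment" if bucket < treatment_share else "control"

    # The same user always lands in the same arm of the same experiment:
    assert assign_variant("user-42", "ad_ranker_v2") == assign_variant("user-42", "ad_ranker_v2")
    print(assign_variant("user-42", "ad_ranker_v2"))

Hashing on the experiment name as well as the user ID keeps assignments independent across concurrent experiments.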
Basic Qualifications
4+ years of professional programming and software design experience (Java, Python, Scala, etc.)
Experience building highly available, scalable microservices
Strong understanding of system architecture and application design
Knowledge of big data technologies and large-scale data processing
Passion for understanding the ad business and driving innovation
Enthusiastic about technology and comfortable working across disciplines
Preferred Qualifications
Domain knowledge in digital advertising
Familiarity with AI/ML technologies and common ML tech stacks
Experience with big data and workflow tools such as Airflow or Databricks
Education
Bachelor's degree plus 5+ years of relevant industry experience
Role Scope
You'll support ongoing initiatives across the ad platform, including building new experimentation and simulation systems used for online A/B testing. Media industry experience is not required.
Technical Environment
Java & Spring Boot for backend microservices
AWS as the primary cloud environment
Python & Scala for data pipelines running on Spark and Airflow
Candidates should be strong in either backend microservices or data pipeline development and open to learning the other
API development experience is required
Interview Process
Round 1: Technical & coding evaluation (1 hour)
Round 2: Technical + behavioral interview (1 hour)
Candidates are assessed on technical strength and eagerness to learn.
Staff Data Engineer
Greensboro, NC jobs
Market America, a product brokerage and Internet marketing company that specializes in One-to-One Marketing, is seeking an experienced Staff Data Engineer for our IT team.
As a senior member of the Data Engineering team, you will have an important role in helping millions of customers on our SHOP.COM and Market America Worldwide multi-country and multi-language global eCommerce websites intelligently find what they want within different categories, merchant offers, products and taxonomy.
We have thousands of 3rd-party affiliates and feeds that go through ETL and data ingestion pipelines before being ingested into our search systems. We have multiple orchestration pipelines supporting various types of data for products, store offers, analytics, customer behavioral profiles, segments, logs, and much more.
If you are passionate about data engineering, processing millions of records, and ETL processes for products, stores, customers, and analytics, this is a highly visible role that will give you the opportunity to make a huge impact on our business and a difference to millions of customers worldwide. The data engineering team processes large amounts of data that we import and collect. The team works on content enrichment, pricing integration, taxonomy assignments and algorithms, category classifier nodes, and machine learning integration within the pipeline.
Key Responsibilities:
Must have a minimum of 10-12 years of hands-on development experience implementing batch and event-driven applications using Java, Kafka, Spark, Scala, PySpark, and Python
Experience with Apache Kafka and its connectors, Java and Spring Boot for building event-driven services, and Python for building ML pipelines
Develop data pipelines responsible for ingesting large amounts of different kinds of data from various sources (see the PySpark sketch after this list)
Help evolve data architecture and work on Next Generation real time pipeline algorithms and architecture in addition to supporting and maintaining current pipelines and legacy systems
Write code and develop worker nodes for business logic, ETL, and orchestration processes
Develop algorithms for better attribution rules and category classifiers
Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive search, discovery, and recommendations.
Work closely with architects, engineers, data analysts, data scientists, contractors/consultants and project managers in assessing project requirements, design, develop and support data ingestions and API services
Work with Data Scientists in building feature engineering pipelines and integrating machine learning models during content enrichment process
Able to influence priorities while working with various partners, including engineers, the project management office, and leadership
Mentor junior team members, define architecture, review code, do hands-on development, and deliver work in sprint cycles
Participate in design discussions with Architects and other team members for the design of new systems and re-engineering of components of existing systems
Wear the architect hat when required, bringing new ideas to the table, thought leadership, and forward thinking
Take a holistic approach to building solutions by thinking about the big picture and the overall solution
Work on moving away from legacy systems into next generation architecture
Take complete ownership from requirements, solution design, development, production launch and post launch production support. Participate in code reviews and regular on-call rotations.
Desire to apply the best solutions in the industry, apply correct design patterns during development, and learn best practices and data engineering tools and technologies
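As a hedged illustration of the feed-ingestion work referenced in this list, here is a minimal PySpark batch sketch; the S3 paths, field names, and validation rules are hypothetical placeholders, not details of Market America's pipelines.

    # Hedged sketch: validate, normalize, and deduplicate an affiliate feed.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("affiliate-feed-etl").getOrCreate()

    raw = spark.read.json("s3://example-bucket/feeds/2024-01-01/")  # placeholder path

    products = (
        raw.filter(F.col("price") > 0)                          # basic validation
           .withColumn("category", F.lower(F.col("category")))  # normalize taxonomy keys
           .dropDuplicates(["merchant_id", "sku"])               # one row per merchant SKU
    )

    (products.write.mode("overwrite")
             .partitionBy("category")
             .parquet("s3://example-bucket/curated/products/"))  # placeholder target

    spark.stop()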
Required Skills & Experience:
BS or MS in Computer Science (or related field) with 10+ years of hands-on software development experience working in large-scale data processing pipelines
Must-have skills are Apache Spark, Scala, and PySpark, with 2-4 years of experience building production-grade batch pipelines that handle large volumes of data.
Must have at least 4+ years of experience in Java and API / Microservices
Must have at least 2+ years of experience in Python
2+ years of experience in understanding and writing complex SQL and stored procedures for processing raw data, ETL, data validation, using databases such as SQL Server, Redis and other NoSQL DBs
Knowledge of Big data technologies, Hadoop, HDFS
Expertise in building event-driven pipelines with Kafka, Java / Spark, and Apache Flink
Expertise with Amazon AWS stack such as EMR, EC2, S3
Experience working with APIs to collect and ingest data, as well as building APIs for business logic
Experience working with setting up, maintaining, and debugging production systems and infrastructure
Experience in building fault-tolerant and resilient systems
Experience in building worker nodes, knowledge of REST principles and data engineering design patterns
In-depth knowledge of Java, Spring Boot, Spark, Scala, PySpark, Python, orchestration tools, ESB, SQL, stored procedures, Docker, RESTful web services, Kubernetes, CI/CD, observability techniques, Kafka, release processes, caching strategies, versioning, B&D, Bitbucket / Git and the AWS cloud ecosystem, NoSQL databases, Hazelcast
Strong software development, architecture diagramming, problem-solving and debugging skills
Phenomenal communication and influencing skills
Nice to Have:
Exposure to Machine Learning (ML), LLM models, using AI during coding, build with AI
Knowledge of Elastic APM, ELK stack and search technologies such as Elasticsearch / Solr
Some experience with workflow orchestration tools such as Airflow or Apache NiFi
Market America offers competitive salary and generous benefits, including health, dental, vision, life, short and long-term disability insurance, a 401(k) retirement plan with company match, and an on-site health clinic.
Qualified candidates should apply online. This position can work remotely based from either our Greensboro NC or Monterey CA offices. Sorry, we are NOT able to sponsor for this position.
Market America is proud to be an equal opportunity employer.
Market America | SHOP.COM is changing the way people shop and changing the economic paradigm so anyone can become financially independent by creating their own economy and converting their spending into earning with the Shopping Annuity.
ABOUT MARKET AMERICA, INC. & SHOP.COM
Market America Worldwide | SHOP.COM is a global e-commerce and digital marketing company that specializes in one-to-one marketing and is the creator of the Shopping Annuity. Its mission is to provide a robust business system for entrepreneurs, while providing consumers a better way to shop. Headquartered in Greensboro, North Carolina, and with eight sites around the globe, including the U.S., Market America Worldwide was founded in 1992 by Founder, Chairman & CEO JR Ridinger. Through the company's primary, award-winning shopping website, SHOP.COM, consumers have access to millions of products, including Market America Worldwide exclusive brands and thousands of top retail brands. Further, SHOP.COM ranks 19th in Newsweek magazine's 2021 Best Online Shops, No. 52 in Digital Commerce 360's (formerly Internet Retailer) 2021 Top 1,000 Online Marketplaces, No. 79 in Digital Commerce 360's 2021 Top 1,000 Online Retailers and No. 11 in the 2021 Digital Commerce 360 Primary Merchandise Category Top 500. The company is also a two-time winner of the Better Business Bureau's Torch Award for Marketplace Ethics and was ranked No. 15 in The Business North Carolina Top 125 Private Companies for 2021. By combining Market America Worldwide's entrepreneurial business model with SHOP.COM's powerful comparative shopping engine, Cashback program, Hot Deals, ShopBuddy, Express Pay checkout, social shopping integration and countless other features, the company has become the ultimate online shopping destination.
For more information about Market America Worldwide: MarketAmerica.com
For more information on SHOP.COM, please visit: SHOP.COM
Data Engineer, New Venture (Senior to Staff level)
San Francisco, CA jobs
At Sanity.io, we're building the future of AI-powered Content Operations. Our AI Content Operating System gives teams the freedom to model, create, and automate content the way their business works, accelerating digital development and supercharging content operations efficiency. Companies like SKIMS, Figma, Riot Games, Anthropic, COMPLEX, Nordstrom, and Morningbrew are using Sanity to power and automate their content operations.
As part of our new venture, your work will center on addressing one of AI's toughest problems: how to help machines truly understand and use human-created content. You'll build systems that structure and enrich large volumes of information to enable AI agents and LLMs to access the right context at the right time. This means designing and developing tools and pipelines that shape, structure, and connect information and content in innovative ways, and creating new methods to ensure AIs reflect the most accurate, authentic, and up-to-date representation of a business, its brand, products, and knowledge base.
As a Senior Data Engineer you'll architect and optimize the data infrastructure that powers our next generation of AI capabilities. You'll be the engine behind our AI systems, building scalable, efficient data pipelines that process massive volumes of content while maintaining low latency and managing costs intelligently. Your work will directly enable AI agents and LLMs to access the right data at the right time. You'll join a small, cross-functional team where your expertise in data engineering and ML infrastructure will be critical to turning ambitious AI concepts into production-ready systems. If you're passionate about building robust data systems that power cutting-edge AI, obsess over performance optimization, and love solving complex scaling challenges, we'd love to have you on the team.
What you will do:
Design, build, and optimize scalable data pipelines for AI and ML workloads, handling large volumes of structured and unstructured content data.
Architect data processing systems that transform, enrich, and prepare content for LLM consumption, with a focus on latency optimization and cost efficiency (see the preparation sketch after this list).
Build ETL/ELT workflows that extract, transform, and load data from diverse sources to support real-time and batch AI operations.
Implement data quality monitoring and observability systems to ensure pipeline reliability and data accuracy for AI models.
Collaborate with engineers and product teams to understand data requirements and design optimal data architectures that support AI features.
Optimize data storage strategies across data lakes, warehouses, and vector databases to balance performance, cost, and scalability.
Build automated data validation and testing frameworks to maintain data integrity throughout the pipeline.
Stay at the forefront of LLM research, understanding model behaviors, limitations, and capabilities to inform system design decisions.
Monitor and optimize pipeline performance, identifying bottlenecks and implementing solutions to improve throughput and reduce latency.
Create clear documentation of data architectures, pipeline logic, and operational procedures.
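To ground the pipeline work above, here is a minimal sketch of a chunk-and-embed preparation step of the kind that feeds a vector store; the chunk size, record shape, and the embed_batch stub are hypothetical stand-ins for a real embedding model and real tuning work.

    # Hedged sketch: split documents into chunks and attach embeddings.
    from typing import Iterable

    CHUNK_CHARS = 800  # illustrative chunk size

    def chunk(text: str, size: int = CHUNK_CHARS) -> list[str]:
        return [text[i:i + size] for i in range(0, len(text), size)]

    def embed_batch(chunks: list[str]) -> list[list[float]]:
        # Placeholder: a real pipeline would call an embedding model here.
        return [[float(len(c))] for c in chunks]

    def prepare_records(doc_id: str, text: str) -> Iterable[dict]:
        pieces = chunk(text)
        for i, (piece, vector) in enumerate(zip(pieces, embed_batch(pieces))):
            yield {"id": f"{doc_id}-{i}", "text": piece, "embedding": vector}

    records = list(prepare_records("doc-1", "Example body text. " * 100))
    print(len(records), records[0]["id"])

Batching the embedding call, as sketched here, is usually where the latency and cost trade-offs mentioned above are won or lost.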
About you:
Based in the San Francisco Bay Area and able to work at least 2 days per week in our San Francisco office.
5+ years of data engineering experience, with at least 2 years focused on AI/ML data pipelines or supporting machine learning workloads.
High level of proficiency in Python and SQL.
Strong experience with distributed data processing frameworks like Apache Spark, Dask, or Ray.
Proficiency with GCP and its data services.
Experience with real-time data streaming technologies like Kafka, Redpanda or NATS.
Familiarity with vector databases (e.g., Milvus, ElasticSearch, Vespa) and their role in AI applications.
Experience with data modeling, schema design, and working with both relational and NoSQL databases (PostgreSQL, MongoDB, Cassandra).
Strong focus on performance optimization, cost management, and building systems that scale efficiently.
Experience implementing data observability and monitoring solutions (e.g., Prometheus, ClickHouse).
Ability to write clean, well-documented, maintainable code with proper testing practices.
Excellent problem-solving skills and a data-driven approach to decision making.
Strong communication skills and ability to collaborate effectively with cross-functional teams.
Comfortable with ambiguity and excited about working on undefined problems that require creative solutions.
Familiarity with data pipeline orchestration tools such as Airflow, Dagster, Prefect, or similar frameworks is a nice to have.
What we can offer:
A highly skilled, inspiring, and supportive team.
Positive, flexible, and trust-based work environment that encourages long-term professional and personal growth.
A global, multicultural team of colleagues and customers.
Comprehensive health plans and perks.
A healthy work-life balance that accommodates individual and family needs.
Competitive salary and stock options program.
Base Salary Range: $210,000 - $265,000 annually. Final compensation within this range will be determined based on the candidate's experience and skill set.
Who we are:
Sanity.io is a modern, flexible content operating system that replaces rigid legacy content management systems. One of our big differentiators is treating content as data so that it can be stored in a single source of truth, but seamlessly adapted and personalized for any channel without extra effort. Forward-thinking companies choose Sanity because they can create tailored content authoring experiences, customized workflows, and content models that reflect their business.
Sanity recently raised an $85m Series C led by GP Bullhound and is also backed by leading investors like ICONIQ Growth, Threshold Ventures, Heavybit and Shopify, as well as founders of companies like Vercel, WPEngine, Twitter, Mux, Netlify and Heroku. This funding round has put Sanity in a strong position for accelerated growth in the coming years.
You can only build a great company with a great culture. Sanity is a 200+ person company with highly committed and ambitious people. We are pioneers, we exist for our customers, we are hel ved, and we love type two fun! Read more about our values here!
Sanity.io
pledges to be an organization that reflects the globally diverse audience that our product serves. We believe that in addition to hiring the best talent, a diversity of perspectives, ideas, and cultures leads to the creation of better products and services. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity.
ETL / Informatica Architect
Palo Alto, CA jobs
Required Skills:
• 8+ years of overall Informatica ETL development experience
• At least 2 years of experience as an Informatica ETL Architect
• Mastery of data warehousing concepts. Candidates should be able to clearly communicate fundamental concepts during the interview and demonstrate previous experience in all aspects.
• Ability to develop a technical work plan and assign work and coordinate across multiple developers and projects.
• Lead technical requirements, technical and data architectures for the data warehouse projects
• Ensure compliance with metadata standards for the data warehouse
• Strong data analysis, data modeling and ETL Architect Skills
• Able to configure the dev, test, and production environments and promotion processes
• Platform is Oracle 11g, Informatica 9.0.1
• Strong business and communication skills
Additional Information
Duration:
3+ Months (strong possibility of extension)
Hire Type: Contract, C2C or 1099
Rate: DOE
Visa: H1, GC or USC only!
Travel Covered: No.
Prefer Local.
Apply today!
Snowflake Data Engineer
New York jobs
We believe that difference sparks brilliance, so we welcome people and ideas from everywhere to join us in stretching what's possible.
At Tapestry, being true to yourself is core to who we are. When each of us brings our individuality to our collective ambition, our creativity is unleashed. This global house of brands - Coach, Kate Spade New York, Stuart Weitzman - was built by unconventional entrepreneurs and unexpected solutions, so when we say we believe in dreams, we mean we believe in making them happen. We're always on a journey to becoming our best, but you can count on this: Here, your voice is valued, your ambitions are supported, and your work is recognized.
A member of the Tapestry family, we are part of a global house of brands that has unwavering optimism and is committed to being innovative and wholly inclusive. Visit Our People page to learn more about Tapestry's commitment to equity, inclusion, and diversity.
Primary Purpose: The ideal candidate is an experienced Data Engineer with a strong background in Snowflake, SQL, and cloud-based data solutions. They are a self-motivated, independent problem-solver who is eager to learn new skills and adapt to changing technologies. Collaboration, performance optimization, and a commitment to maintaining data security and compliance are critical for success in this role.
The successful individual will leverage their proficiency in Data Engineering to...
Develop and manage data models, pipelines, and transformations.
Demonstrate proficiency in SQL and Bash scripting.
Leverage 5+ years of experience in data engineering or related roles.
Collaborate with Data Engineering, Product Engineering, and Product Management teams to align with the product roadmap.
Effectively document and communicate technical solutions to diverse audiences.
Demonstrate a strong ability to work independently, take ownership of tasks and drive them to completion.
Show a proactive approach to learning new technologies, tools and skills as needed.
The accomplished individual will...
Design, implement, and manage Snowflake Data Warehouse solutions.
Be proficient in creating and optimizing Snowflake schema designs for performance and scalability.
Utilize Snowflake features such as data sharing, cloning, and time travel (see the sketch after this list).
Analyze and optimize Snowflake compute resources (e.g. virtual warehouses).
Optimize queries, indexes and storage for improved performance.
Maintain data security and ensure compliance with regulations; knowledge of SOC 2 compliance standards preferred.
Integrate Snowflake with AWS services, including S3 and Glue (preferred).
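As a brief illustration of the Snowflake features named above, the sketch below issues a zero-copy clone and a time-travel query through the Snowflake Python connector; the connection parameters and table names are placeholders, not details of Tapestry's environment.

    # Hedged sketch: Snowflake cloning and time travel via the Python connector.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",  # placeholder
        user="example_user",        # placeholder
        password="...",             # placeholder
        warehouse="ANALYTICS_WH",   # placeholder virtual warehouse
    )

    with conn.cursor() as cur:
        # Zero-copy clone: an instant, storage-cheap sandbox of a production table.
        cur.execute("CREATE OR REPLACE TABLE sales_dev CLONE sales")
        # Time travel: query the table as it looked one hour ago.
        cur.execute("SELECT COUNT(*) FROM sales AT(OFFSET => -3600)")
        print(cur.fetchone())

    conn.close()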
An outstanding professional will have...
Bachelor's degree in Computer Science, Information Systems, or a related field.
Snowflake certification (preferred).
AWS certification (preferred).
Experience with Agile methodologies and modern software development lifecycle.
Familiarity with Python and/or Java for Snowflake-related automation (preferred).
Familiarity with Node.js for API development (preferred).
Our Competencies for All Employees
Courage: Doesn't hold back anything that needs to be said; provides current, direct, complete, and “actionable” positive and corrective feedback to others; lets people know where they stand; faces up to people problems on any person or situation (not including direct reports) quickly and directly; is not afraid to take negative action when necessary.
Creativity: Comes up with a lot of new and unique ideas; easily makes connections among previously unrelated notions; tends to be seen as original and value-added in brainstorming settings.
Customer Focus: Is dedicated to meeting the expectations and requirements of internal and external customers; gets first-hand customer information and uses it for improvements in products and services; acts with customers in mind; establishes and maintains effective relationships with customers and gains their trust and respect.
Dealing with Ambiguity: Can effectively cope with change; can shift gears comfortably; can decide and act without having the total picture; isn't upset when things are up in the air; doesn't have to finish things before moving on; can comfortably handle risk and uncertainty.
Drive for Results: Can be counted on to exceed goals successfully; is constantly and consistently one of the top performers; very bottom-line oriented; steadfastly pushes self and others for results.
Interpersonal Savvy: Relates well to all kinds of people, up, down, and sideways, inside and outside the organization; builds appropriate rapport; builds constructive and effective relationships; uses diplomacy and tact; can defuse even high-tension situations comfortably.
Learning on the Fly: Learns quickly when facing new problems; a relentless and versatile learner; open to change; analyzes both successes and failures for clues to improvement; experiments and will try anything to find solutions; enjoys the challenge of unfamiliar tasks; quickly grasps the essence and the underlying structure of anything.
Our Competencies for All People Managers
Strategic Agility: Sees ahead clearly; can anticipate future consequences and trends accurately; has broad knowledge and perspective; is future oriented; can articulately paint credible pictures and visions of possibilities and likelihoods; can create competitive and breakthrough strategies and plans.
Developing Direct Reports and Others: Provides challenging and stretching tasks and assignments; holds frequent development discussions; is aware of each person's career goals; constructs compelling development plans and executes them; pushes people to accept developmental moves; will take on those who need help and further development; cooperates with the developmental system in the organization; is a people builder.
Building Effective Teams: Blends people into teams when needed; creates strong morale and spirit in his/her team; shares wins and successes; fosters open dialogue; lets people finish and be responsible for their work; defines success in terms of the whole team; creates a feeling of belonging in the team.
Tapestry, Inc. is an equal opportunity and affirmative action employer and we pride ourselves on hiring and developing the best people. All employment decisions (including recruitment, hiring, promotion, compensation, transfer, training, discipline and termination) are based on the applicant's or employee's qualifications as they relate to the requirements of the position under consideration. These decisions are made without regard to age, sex, sexual orientation, gender identity, genetic characteristics, race, color, creed, religion, ethnicity, national origin, alienage, citizenship, disability, marital status, military status, pregnancy, or any other legally-recognized protected basis prohibited by applicable law.
Americans with Disabilities Act (ADA)
Tapestry, Inc. will provide applicants and employees with reasonable accommodation for disabilities or religious beliefs. If you require reasonable accommodation to complete the application process, please contact Tapestry People Services at ************** or ******************************. #LI-HYBRID
Visit Tapestry, Inc. at ************************
Work Setup: Hybrid Flex (in office 1 to 3 days a week)
BASE PAY RANGE $140,000.00 TO $170,000.00 Annually
Click Here - U.S Corporate Compensation & Benefit
Data Engineer - Kafka
Salisbury, NC jobs
Ahold Delhaize USA, a division of global food retailer Ahold Delhaize, is part of the U.S. family of brands, which includes five leading omnichannel grocery brands - Food Lion, Giant Food, The GIANT Company, Hannaford and Stop & Shop. Our associates support the brands with a wide range of services, including Finance, Legal, Sustainability, Commercial, Digital and E-commerce, Technology and more.
Primary Purpose:
The Data Engineer II contributes to expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. They will contribute to our data initiatives and ensure optimal data delivery architecture is consistent throughout ongoing projects. They engage through the entire lifecycle of a project, from data mapping, data pipelines, and data modeling to data consumption. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. They will learn to optimize or even re-design our company's data architecture to support our next generation of products and data initiatives. They can take on smaller projects from start to finish, work on problems of moderate scope where analysis of situations or data requires a review of a variety of factors, and trace issues to their source. They develop solutions to a variety of problems of moderate scope and complexity.
Our flexible/hybrid work schedule includes 3 in-person days at one of our core locations and 2 remote days. Our core office locations include Salisbury, NC, Chicago, IL, and Quincy, MA.
Applicants must be currently authorized to work in the United States on a full-time basis.
Duties & Responsibilities:
* Solves simple to moderate application errors, resolves application problems, following up promptly with all appropriate customers and IT personnel.
* Reviews and contributes to QA test plans and supports QA team during test execution.
* Participates in developing streaming data applications (Kafka), data transformation, and data pipelines (see the consumer sketch after this list).
* Ensures change control and change management procedures are followed within the program/project as they relate to requirements.
* Interprets requirement documents and contributes to creating functional design documents as part of the data development life cycle.
* Documents all phases of work including gathering requirements, architectural diagrams, and other program technical specifications using current specified design standards for new or revised solutions.
* Relates information from various sources to draw logical conclusions.
* Conducts unit testing on data streams.
* Conducts data lineage and impact analysis as a part of the change management process.
* Conducts data analysis (SQL, Excel, Data Discovery, etc.) on legacy systems and new data sources.
* Creates source to target data mappings for data pipelines and integration activities.
* Assists in identifying the impact of proposed application development/enhancement projects.
* Performs data profiling and process analysis to understand key source systems and uses knowledge of application features and functions to assess scope and impact of business needs.
* Implements and maintains data governance policies and procedures to ensure data quality, security, and compliance.
* Ensures operational stability of a 24/7/365 grocery retail environment by providing technical support, system monitoring, and issue resolution, which may be required during off-hours, weekends, and holidays as needed.
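As a hedged illustration of the streaming duties in this list, here is a minimal consumer loop using the confluent-kafka client; the broker address, topic, and consumer group are placeholders, not details of Ahold Delhaize's environment.

    # Hedged sketch: consume and process a stream of retail events.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",  # placeholder
        "group.id": "store-events-etl",      # placeholder
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["store-events"])      # placeholder topic

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            event = json.loads(msg.value())
            # A real pipeline's transformation and load steps would go here.
            print(event.get("event_type"))    # placeholder field
    finally:
        consumer.close()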
Qualifications:
* Bachelor's degree in Computer Science or a technical field; equivalent training, certifications, or experience will be considered
* 3 or more years of equivalent experience in relevant job or field of technology
Preferred Qualifications:
* Master's degree in a relevant field of study preferred; additional training or certifications in a relevant field of study preferred
* Experience in Agile teams and/or a product/platform-based operating model.
* Experience in retail or grocery preferred.
* Experience with Kafka.
#DICEJobs #LI-hybrid #LI-SS1
Salary Range: $101,360 - $152,040
Actual compensation offered to a candidate may vary based on their unique qualifications and experience, internal equity, and market conditions. Final compensation decisions will be made in accordance with company policies and applicable laws.
At Ahold Delhaize USA, we provide services to one of the largest portfolios of grocery companies in the nation, and we're actively seeking top talent.
Our team shares a common motivation to drive change, take ownership and enable our brands to better care for their customers. We thrive on supporting great local grocery brands and their strategies.
Our associates are the heartbeat of our organization. We are committed to offering a welcoming work environment where all associates can succeed and thrive. Guided by our values of courage, care, teamwork, integrity (and even a little humor), we are dedicated to being a great place to work.
We believe in collaboration, curiosity, and continuous learning in all that we think, create and do. While building a culture where personal and professional growth are just as important as business growth, we invest in our people, empowering them to learn, grow and deliver at all levels of the business.
Sr. Data Engineer
Saint Louis, MO jobs
Sr. Data Engineer
Sr. Data Engineer for St. Louis, MO to lead design & implementation of data pipeline architecture; identify data solutions to meet business capability needs & processes; develop & maintain continuous integration processes and infrastructure; facilitate end-to-end pipeline orchestration; write, maintain & review software code; develop architecture & deployment strategy to automate data aggregation; mentor & coach junior team members; drive improvements to internal processes; lead discussions with product management stakeholders; apply agile principles to data engineering projects.

Requires Master's in Applied Computer Science, Computer Information Systems, or closely related field & 2 yrs experience designing logical & physical models for datasets; building semantic data layers using fit-for-purpose methodologies, including linked data concepts and/or the Resource Description Framework; working with NoSQL databases, including Google BigQuery for data warehousing; using object-oriented programming languages, including Unix shell scripting, Python, Spark, and Scala; extracting & transforming information from relational databases, including MySQL and Oracle, to load into cloud-based data lakes and/or warehouses; using Google Cloud Storage & Google Dataproc for data pipeline orchestration; using SQL to create aggregated tables and materialized & virtualized views; using GitHub and/or Bitbucket for version control, branching & merging code; and using Jira, Aha! and/or Azure DevOps for project management. Will also accept Bachelor's in said fields & 5 yrs progressive, post-Bachelor's stated experience.

Telecommuting permitted from home office location anywhere in the U.S.

Salary Range: Employees can expect to be paid a salary between $142,000.00 and $160,000.00. Additional compensation may include a bonus or commission (if relevant). Additional benefits include health care, vision, dental, retirement, PTO, sick leave, etc. The offered salary may vary within this range based on an applicant's location, market data/ranges, an applicant's skills and prior relevant experience, certain degrees and certifications, and other relevant factors.

Mail resume to Cascinda Fischbeck, Bayer Research and Development Services LLC, 800 N. Lindbergh Blvd., E2NE, St. Louis, MO 63167 or email resume to careers_************. Include reference code below with resume.
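As a loose illustration of one requirement above (using SQL to create aggregated tables in BigQuery), here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical placeholders.

    # Hedged sketch: build an aggregated table in BigQuery.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # placeholder project

    query = """
    CREATE OR REPLACE TABLE analytics.daily_yield AS
    SELECT field_id, DATE(observed_at) AS obs_date, AVG(yield) AS avg_yield
    FROM raw.plot_observations
    GROUP BY field_id, obs_date
    """  # placeholder dataset, table, and column names

    client.query(query).result()  # blocks until the job finishes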
Bayer Research & Development Services LLC is an Equal Opportunity Employer/Disabled/Veterans
Bayer Research & Development Services LLC is committed to providing access and reasonable accommodations in its application process for individuals with disabilities and encourages applicants with disabilities to request any needed accommodation(s) using the contact information below.
If you meet the requirements of this unique opportunity and want to impact our mission of Science for a better life, we encourage you to apply now. Job postings will remain open for a minimum of ten business days and are subject to immediate closure thereafter without additional notice.
Division:
Crop Science
Reference Code
858546
Functional Area:
Biotech
Location:
St. Louis, MO
Employment Type:
Regular
Position Grade:
Contact Us
Address
Creve Coeur, MO 63167
Telephone
OR
careers_************
Data Engineer (Senior to Staff level)
Atlanta, GA jobs
At Sanity, we're building the future of AI-powered Content Operations. Our AI Content Operating System gives teams the freedom to model, create, and automate content the way their business works, accelerating digital development and supercharging content operations efficiency. Companies like Linear, Figma, Cursor, Riot Games, Anthropic, and Morningbrew are using Sanity to power and automate their content operations.
About the role
We are seeking a talented and experienced Data Engineer (Senior to Staff level) to join our growing data team at a pivotal time in our development. As a key member of our data engineering team, you'll help scale and evolve our data infrastructure to ensure Sanity can make better data-driven decisions.
This is an opportunity to work on mission-critical data systems that power our B2B SaaS platform. You'll improve our data pipelines, optimize data models, and strengthen our analytics capabilities using modern tools like Airflow, Airbyte, BigQuery, dbt, and RudderStack. Working closely with engineers, analysts, and business stakeholders across US and European time zones, you'll help foster a data-driven culture by making data more accessible, reliable, and actionable across the organization.
If you're passionate about solving complex data challenges, have experience scaling data infrastructure in B2B environments, and want to make a significant impact at a fast-growing company, we want to talk to you. This role offers the perfect blend of technical depth and strategic influence, allowing you to shape how Sanity leverages data to drive business success.
What you'll be doing
Data Infrastructure & ETL Development
Design, develop, and maintain scalable ETL/ELT pipelines to ensure data is efficiently processed, transformed, and made available across the company.
Collaborate with engineering teams to implement and scale product telemetry across our product surfaces.
Develop and maintain data models in BigQuery that balance performance, cost, and usability.
Establish best practices for data ingestion, transformation, and orchestration, ensuring reliability and efficiency.
Orchestrate data workflows to reduce manual effort, improve efficiency, and maintain high data quality standards (a minimal DAG sketch follows this list).
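To make the orchestration work in this list concrete, here is a minimal Airflow 2.x-style DAG sketch; the DAG ID, schedule, and task bodies are illustrative placeholders, not Sanity's actual pipelines.

    # Hedged sketch: a two-step daily ELT workflow in Airflow.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw telemetry")         # placeholder extract step

    def transform():
        print("run transformation models")  # placeholder transform step

    with DAG(
        dag_id="telemetry_daily",           # placeholder DAG ID
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task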
Collaboration & Cross-Team Partnerships
Work closely with data analysts, engineers, and other internal stakeholders to understand their data needs and design robust pipelines that support data-driven decision-making.
Build scalable and flexible data solutions that address both current business requirements and future growth needs.
Partner with engineering, growth, and product teams to enhance data accessibility and usability.
Data Observability & Reliability
Build and maintain comprehensive monitoring, alerting, and logging systems for all data pipelines and infrastructure
Implement SLAs/SLOs for critical data pipelines and establish incident response procedures
Develop data quality monitoring that proactively detects anomalies, schema changes, and data freshness issues
Create dashboards and alerting systems that provide visibility into pipeline health, data lineage, and system performance
Debug and troubleshoot data issues efficiently using observability tools and data lineage tracking
Continuous Improvement & Scalability
Monitor and optimize data pipeline performance and costs as data volumes grow
Implement and maintain data quality frameworks and testing practices
Contribute to the evolution of our data infrastructure through careful evaluation of new tools and technologies
Help establish data engineering best practices that scale with our growing business needs
This may be you
Remote in Europe or North America (East Coast/ET)
4+ years of experience building data pipelines at scale, with deep expertise in SQL, Python, and Node.js/TypeScript for data engineering workflows
Proactive mindset with attention to detail, particularly in maintaining comprehensive documentation and data lineage
Strong communication skills with demonstrated ability to collaborate effectively across US and European time zones
Production experience with workflow orchestration tools like Airflow, and customer data platforms like RudderStack, ideally in a B2B SaaS environment
Proven experience integrating and maintaining data flows with CRM systems like Salesforce, Marketo, or HubSpot
Track record of building reliable data infrastructure that supports rapid business growth and evolving analytics needs
Experience implementing data quality frameworks and monitoring systems to ensure reliable data delivery to stakeholders
Nice to have:
Experience with product analytics tools like Amplitude, Mixpanel, or PostHog
Experience with Google Cloud Platform and BigQuery
What we can offer
A highly-skilled, inspiring, and supportive team
Positive, flexible, and trust-based work environment that encourages long-term professional and personal growth
A global, multiculturally diverse group of colleagues and customers
Comprehensive health plans and perks
A healthy work-life balance that accommodates individual and family needs
Competitive stock options program and location-based salary
Who we are
Sanity.io is a modern, flexible content operating system that replaces rigid legacy content management systems. One of our big differentiators is treating content as data so that it can be stored in a single source of truth, but seamlessly adapted and personalized for any channel without extra effort. Forward-thinking companies choose Sanity because they can create tailored content authoring experiences, customized workflows, and content models that reflect their business.
Sanity recently raised an $85M Series C led by GP Bullhound and is also backed by leading investors like ICONIQ Growth, Threshold Ventures, Heavybit, and Shopify, as well as founders of companies like Vercel, WPEngine, Twitter, Mux, Netlify, and Heroku. This funding round has put Sanity in a strong position for accelerated growth in the coming years.
You can only build a great company with a great culture. Sanity is a 200+ person company with highly committed and ambitious people. We are pioneers, we exist for our customers, we are "hel ved" (a Norwegian idiom, roughly "solid wood": genuine and dependable), and we love type-two fun! Read more about our values here!
Sanity.io pledges to be an organization that reflects the globally diverse audience that our product serves. We believe that in addition to hiring the best talent, a diversity of perspectives, ideas, and cultures leads to the creation of better products and services. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, or gender identity.
Lead Data Engineer
Chicago, IL jobs
At Bayer we're visionaries, driven to solve the world's toughest challenges and striving for a world where 'Health for all, Hunger for none' is no longer a dream, but a real possibility. We're doing it with energy, curiosity, and sheer dedication, always learning from the unique perspectives of those around us, expanding our thinking, growing our capabilities, and redefining 'impossible'. There are so many reasons to join us. If you're hungry to build a varied and meaningful career in a community of brilliant and diverse minds to make a real difference, there's only one choice.
Lead Data Engineer
Lead Data Engineer for Chicago, IL to:
* develop models and create infrastructure to test automation and digital products;
* collaborate with research groups to understand models and products;
* develop data solutions to meet automation and scaling needs;
* evaluate limitations of existing data pipelines and provide recommendations to address unmet needs;
* ensure availability of data architectures and computational infrastructure to support development and testing of machine and deep learning models;
* create data-driven tools, templates, packages, visualizations, and dashboards to improve model development and insights;
* use data science tools and programming languages to write robust, well-documented code and extract and manipulate data;
* lead definition of project goals and milestones; and
* mentor junior team members.
Requires a Master's degree in Data Science, Analytics, Computer Science, or a closely related field and 3 years of experience in IT-related position(s), including:
* engineering data-intensive software using streaming and resource-based design principles;
* programming in Python to build data pipelines, packages, and model frameworks (see the sketch below);
* developing analytics tools and data visualizations using Python with NumPy, Pandas, Matplotlib, Jinja, and/or Seaborn;
* using cloud-native technologies, including Apache Kafka, AWS SQS, Apache Spark, AWS Lambda, AWS Step Functions, Amazon ECS, Amazon Athena, Google Cloud Platform, Google Cloud Functions, and/or Kubernetes, to process data at scale and deliver data pipelines;
* automating and scaling data processes and frameworks using Jinja2, JSON, and/or YAML;
* applying object-oriented programming concepts to datasets;
* designing platform-agnostic architecture using abstraction and modularity design principles;
* using Unittest and Pytest to unit test packages and models;
* running ad-hoc and scheduled jobs with shell scripting and/or cron;
* applying Agile project management principles in JIRA; and
* creating and maintaining code, documentation, and trainings on Git, SharePoint, and/or Confluence.
Up to 5% U.S. travel required. Telecommuting permitted from a home office location within reasonable commuting distance of Chicago, IL up to 4 days per week.
Salary Range: Employees can expect to be paid a salary between $138,000.00 and $155,000.00. Additional compensation may include a bonus or commission (if relevant). Additional benefits include health care, vision, dental, retirement, PTO, sick leave, etc. The offered salary may vary within this range based on an applicant's location, market data/ranges, an applicant's skills and prior relevant experience, certain degrees and certifications, and other relevant factors.
Mail resume to Jill Martin, Bayer Research and Development Services, LLC, 800 N. Lindbergh Blvd., E2NE, St. Louis, MO 63167 or email resume to careers_************. Include the reference code below with your resume.
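Purely as a hedged illustration of the Python pipeline and Unittest/Pytest experience described above (this is not Bayer code; the function, column names, and validation rules are all hypothetical):

```python
import pandas as pd

def clean_measurements(df):
    """Drop rows missing required fields and cast values to floats (illustrative rules)."""
    out = df.dropna(subset=["sample_id", "value"]).copy()
    out["value"] = out["value"].astype(float)
    return out

# A pytest-style unit test of the transformation above.
def test_clean_measurements():
    raw = pd.DataFrame({"sample_id": ["a", None], "value": ["1.5", "2.0"]})
    cleaned = clean_measurements(raw)
    assert list(cleaned["sample_id"]) == ["a"]
    assert cleaned["value"].iloc[0] == 1.5
```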
YOUR APPLICATION
Bayer offers a wide variety of competitive compensation and benefits programs. If you meet the requirements of this unique opportunity, and want to impact our mission Science for a better life, we encourage you to apply now. Be part of something bigger. Be you. Be Bayer.
To all recruitment agencies: Bayer does not accept unsolicited third party resumes.
Bayer is an Equal Opportunity Employer/Disabled/Veterans
Bayer is committed to providing access and reasonable accommodations in its application process for individuals with disabilities and encourages applicants with disabilities to request any needed accommodation(s) using the contact information below.
Bayer is an E-Verify Employer.
Location:
United States : Illinois : Chicago
Division:
Enabling Functions
Reference Code:
852905
Contact Us
Email:
careers_************
Data Engineer
McLean, VA jobs
Range is creating AI-powered solutions to eliminate financial complexity for our members. We're transforming wealth management through the perfect blend of cutting-edge technology and human expertise. We're obsessed with member experience! We've built an integrated platform that tackles the full spectrum of financial needs: investments, taxes, retirement planning, and estate management, all unified in one intuitive system.
Backed by Scale, Google's Gradient Ventures, and Cathay Innovations, we're in hyper-growth mode and looking for exceptional talent to join our starting lineup. We recently raised $60M in our Series C funding and want builders to help scale the company. Every Ranger at this stage is shaping our culture and way of life-from former CEOs and startup founders to experts from leading hedge funds and tech companies.
If you're ready to build something that truly matters in financial services, bring your talent to Range. Here, you'll make a genuine impact on how people manage their financial lives while working alongside a team that celebrates wins, makes big decisions, and blazes new trails together.
About the role
As a Data Engineer at Range, you'll play a central role in building and scaling the data infrastructure that powers our analytics, product insights, and customer experiences. You will design, develop, and maintain robust data pipelines and platforms that ensure data is accurate, secure, and available for both real-time and batch analytics. You'll collaborate closely with product, analytics, data science, and engineering teams to turn raw data into reliable, scalable information that drives business decisions and fuels growth. This role is ideal for someone who thrives on solving technical data challenges in a fast-moving fintech environment.
We're excited to hire this role at Range's Headquarters in McLean, VA. All of our positions follow an in-office schedule Monday through Friday, allowing you to collaborate directly with your team. If you're not currently based in the area, but love what you see, let's discuss relocation as part of your journey to joining us.
What you'll do with us
Create, maintain, and optimize scalable data pipelines and ETL/ELT workflows to support analytics and product initiatives.
Integrate data from various internal and external sources, ensuring seamless ingestion, transformation, and delivery (see the sketch after this list).
Help shape our data architecture, including data warehouse/schema design, storage solutions, and workflow orchestration.
Optimize queries and storage performance for efficiency at scale, especially as data volume grows with our customer base.
Implement data quality checks, monitoring systems, and troubleshooting tools to ensure data reliability and accuracy.
Work closely with data scientists, analysts, and cross-functional partners to understand data needs and deliver timely solutions.
Maintain clear documentation for pipelines, architecture decisions, and standards to support team alignment and onboarding.
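As a hedged sketch of the ingestion-and-delivery work above, idempotent loading is one common pattern: reruns overwrite rather than duplicate rows. The table and values below are hypothetical, and SQLite (3.24+ for upsert syntax) is used only to keep the example self-contained:

```python
import sqlite3

def load_accounts(rows):
    """Upsert rows by primary key so pipeline reruns stay idempotent."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL)")
    conn.executemany(
        "INSERT INTO accounts (id, balance) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET balance = excluded.balance",
        rows,
    )
    conn.commit()
    return conn

# Loading the same account twice updates it instead of duplicating it.
conn = load_accounts([("acct-1", 100.0), ("acct-1", 120.0)])
print(conn.execute("SELECT * FROM accounts").fetchall())  # [('acct-1', 120.0)]
```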
What will set you apart
5+ years of experience in Data Engineering
BS or BA in Computer Science, Engineering, Statistics, Economics, Mathematics, or another quantitative discipline from a top-tier university
Strong proficiency in Python and SQL for building and optimizing data workflows.
Hands-on experience with cloud data platforms and toolchains (e.g., AWS, GCP, Snowflake, BigQuery, Redshift).
Familiarity with data pipeline orchestration tools such as Airflow, dbt, Prefect, or similar.
Experience with data modeling, schema design, and data warehousing concepts in large-scale environments.
Strong analytical mindset with excellent problem-solving skills, especially in ambiguous or evolving environments.
Comfortable working collaboratively across teams and translating business needs into technical solutions.
Experience working in fintech or other consumer-focused, high-growth technology startup environments.
Benefits
Health & Wellness: 100% employer-covered medical insurance for employees (75% for dependents), plus dental and vision coverage
401(k): Retirement savings program to support your future
Paid Time Off: Dedicated time to reset and recharge plus most federal holidays
Parental Leave: Comprehensive leave policy for growing families
Meals: Select meals covered throughout the week
Fitness: Monthly movement stipend
Equity & Career Growth: Early exercise eligibility and a strong focus on professional development
Annual Compensation Reviews: Salary and equity refreshes based on performance
Boomerang Program: After two years at Range, you can take time away to start your own company. We'll hold your spot for 6 months - and pause your equity vesting, which resumes if you return
Range is proud to be an equal opportunity workplace. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. As a company, we are committed to designing products, building a culture, and supporting a team that reflects the diverse population we serve.
Data Engineer - Kafka
Quincy, MA jobs
Ahold Delhaize USA, a division of global food retailer Ahold Delhaize, is part of the U.S. family of brands, which includes five leading omnichannel grocery brands - Food Lion, Giant Food, The GIANT Company, Hannaford and Stop & Shop. Our associates support the brands with a wide range of services, including Finance, Legal, Sustainability, Commercial, Digital and E-commerce, Technology and more.
Primary Purpose:
The Data Engineer II contributes to expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. They contribute to our data initiatives and ensure that optimal data delivery architecture is consistent across ongoing projects. They engage through the entire lifecycle of a project, from data mapping and data pipelines to data modeling and, finally, data consumption. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. They will learn to optimize, or even re-design, our company's data architecture to support our next generation of products and data initiatives. They can take smaller projects from start to finish, work on problems of moderate scope where analysis of situations or data requires reviewing a variety of factors, and trace issues to their source. They develop solutions to a variety of problems of moderate scope and complexity.
Our flexible/hybrid work schedule includes 3 in-person days at one of our core locations and 2 remote days. Our core office locations include Salisbury, NC, Chicago, IL, and Quincy, MA.
Applicants must be currently authorized to work in the United States on a full-time basis.
Duties & Responsibilities:
* Resolves simple to moderately complex application errors and problems, following up promptly with all appropriate customers and IT personnel.
* Reviews and contributes to QA test plans and supports QA team during test execution.
* Participates in developing streaming data applications (Kafka), data transformations, and data pipelines (see the consumer sketch after this list).
* Ensures change control and change management procedures are followed within the program/project as they relate to requirements.
* Interprets requirements documents and contributes to creating functional design documents as part of the data development life cycle.
* Documents all phases of work including gathering requirements, architectural diagrams, and other program technical specifications using current specified design standards for new or revised solutions.
* Relates information from various sources to draw logical conclusions.
* Conducts unit testing on data streams.
* Conducts data lineage and impact analysis as a part of the change management process.
* Conducts data analysis (SQL, Excel, Data Discovery, etc.) on legacy systems and new data sources.
* Creates source to target data mappings for data pipelines and integration activities.
* Assists in identifying the impact of proposed application development/enhancements projects.
* Performs data profiling and process analysis to understand key source systems and uses knowledge of application features and functions to assess scope and impact of business needs.
* Implements and maintains data governance policies and procedures to ensure data quality, security, and compliance.
* Ensures operational stability of a 24/7/365 grocery retail environment by providing technical support, system monitoring, and issue resolution, which may be required during off-hours, weekends, and holidays as needed.
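As a hedged sketch of the Kafka streaming work in the duties above (using the third-party kafka-python package; the topic, broker address, and field names are hypothetical, and a running broker is assumed):

```python
import json

from kafka import KafkaConsumer  # third-party kafka-python package

consumer = KafkaConsumer(
    "store.pos.transactions",        # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="pipeline-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Minimal transformation step: route only completed sales downstream.
    if event.get("status") == "completed":
        print(event["store_id"], event["amount"])
```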
Qualifications:
* Bachelor's degree in Computer Science or a technical field; equivalent training, certifications, or experience will be considered
* 3 or more years of equivalent experience in a relevant job or field of technology
Preferred Qualifications:
* Master's degree in a relevant field of study preferred; additional training or certifications in a relevant field also preferred.
* Experience in Agile teams and/or a product/platform-based operating model.
* Experience in retail or grocery preferred.
* Experience with Kafka.
#DICEJobs #LI-hybrid #LI-SS1
Salary Range: $101,360 - $152,040
Actual compensation offered to a candidate may vary based on their unique qualifications and experience, internal equity, and market conditions. Final compensation decisions will be made in accordance with company policies and applicable laws.
At Ahold Delhaize USA, we provide services to one of the largest portfolios of grocery companies in the nation, and we're actively seeking top talent.
Our team shares a common motivation to drive change, take ownership and enable our brands to better care for their customers. We thrive on supporting great local grocery brands and their strategies.
Our associates are the heartbeat of our organization. We are committed to offering a welcoming work environment where all associates can succeed and thrive. Guided by our values of courage, care, teamwork, integrity (and even a little humor), we are dedicated to being a great place to work.
We believe in collaboration, curiosity, and continuous learning in all that we think, create and do. While building a culture where personal and professional growth are just as important as business growth, we invest in our people, empowering them to learn, grow and deliver at all levels of the business.
Data Engineer - Kafka
Chicago, IL jobs
Ahold Delhaize USA, a division of global food retailer Ahold Delhaize, is part of the U.S. family of brands, which includes five leading omnichannel grocery brands - Food Lion, Giant Food, The GIANT Company, Hannaford and Stop & Shop. Our associates support the brands with a wide range of services, including Finance, Legal, Sustainability, Commercial, Digital and E-commerce, Technology and more.
Primary Purpose:
The Data Engineer II contributes to expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. They contribute to our data initiatives and ensure that optimal data delivery architecture is consistent across ongoing projects. They engage through the entire lifecycle of a project, from data mapping and data pipelines to data modeling and, finally, data consumption. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. They will learn to optimize, or even re-design, our company's data architecture to support our next generation of products and data initiatives. They can take smaller projects from start to finish, work on problems of moderate scope where analysis of situations or data requires reviewing a variety of factors, and trace issues to their source. They develop solutions to a variety of problems of moderate scope and complexity.
Our flexible/hybrid work schedule includes 3 in-person days at one of our core locations and 2 remote days. Our core office locations include Salisbury, NC, Chicago, IL, and Quincy, MA.
Applicants must be currently authorized to work in the United States on a full-time basis.
Duties & Responsibilities:
* Resolves simple to moderately complex application errors and problems, following up promptly with all appropriate customers and IT personnel.
* Reviews and contributes to QA test plans and supports QA team during test execution.
* Participates in developing streaming data applications (Kafka), data transformations, and data pipelines.
* Ensures change control and change management procedures are followed within the program/project as they relate to requirements.
* Interprets requirements documents and contributes to creating functional design documents as part of the data development life cycle.
* Documents all phases of work including gathering requirements, architectural diagrams, and other program technical specifications using current specified design standards for new or revised solutions.
* Relates information from various sources to draw logical conclusions.
* Conducts unit testing on data streams.
* Conducts data lineage and impact analysis as a part of the change management process.
* Conducts data analysis (SQL, Excel, Data Discovery, etc.) on legacy systems and new data sources.
* Creates source to target data mappings for data pipelines and integration activities.
* Assists in identifying the impact of proposed application development/enhancements projects.
* Performs data profiling and process analysis to understand key source systems and uses knowledge of application features and functions to assess scope and impact of business needs.
* Implements and maintains data governance policies and procedures to ensure data quality, security, and compliance.
* Ensures operational stability of a 24/7/365 grocery retail environment by providing technical support, system monitoring, and issue resolution, which may be required during off-hours, weekends, and holidays as needed.
Qualifications:
* Bachelor's degree in Computer Science or a technical field; equivalent training, certifications, or experience will be considered
* 3 or more years of equivalent experience in a relevant job or field of technology
Preferred Qualifications:
* Master's degree in a relevant field of study preferred; additional training or certifications in a relevant field also preferred.
* Experience in Agile teams and/or a product/platform-based operating model.
* Experience in retail or grocery preferred.
* Experience with Kafka.
#DICEJobs #LI-hybrid #LI-SS1
Salary Range: $101,360 - $152,040
Actual compensation offered to a candidate may vary based on their unique qualifications and experience, internal equity, and market conditions. Final compensation decisions will be made in accordance with company policies and applicable laws.
At Ahold Delhaize USA, we provide services to one of the largest portfolios of grocery companies in the nation, and we're actively seeking top talent.
Our team shares a common motivation to drive change, take ownership and enable our brands to better care for their customers. We thrive on supporting great local grocery brands and their strategies.
Our associates are the heartbeat of our organization. We are committed to offering a welcoming work environment where all associates can succeed and thrive. Guided by our values of courage, care, teamwork, integrity (and even a little humor), we are dedicated to being a great place to work.
We believe in collaboration, curiosity, and continuous learning in all that we think, create and do. While building a culture where personal and professional growth are just as important as business growth, we invest in our people, empowering them to learn, grow and deliver at all levels of the business.
Retail Data Engineer
Atlanta, GA jobs
The Retail Data Engineer plays a critical role in managing the flow, transformation, and integrity of scan-level data within our retail data ecosystem. This role ensures that raw transactional data such as point-of-sale (POS), promotional, loyalty, and product data is clean, consistent, and fit for analysis. This individual will collaborate closely with merchandising, marketing, operations, and analytics teams to deliver trusted data that powers key business decisions in a dynamic retail environment.
What You'll Do:
Works with business teams (e.g., Category Management, Marketing, Supply Chain) to define and refine data needs.
Identifies gaps or ambiguities in retail scan data (e.g., barcode inconsistencies, vendor mappings).
Translates complex retail requirements into technical specifications for data ingestion and transformation.
Develops, schedules, and optimizes ETL/ELT processes to ingest large volumes of scan data (e.g., from POS, ERP, loyalty programs).
Applies robust transformation logic to normalize data across vendors, stores, and systems.
Works with both structured and semi-structured retail datasets (CSV, JSON, EDI, etc.).
Implements data validation, reconciliation, and anomaly detection for incoming retail data feeds (see the sketch after this list).
Designs and maintains audit trails and data lineage for scan data.
Investigates and resolves data discrepancies in collaboration with store systems, IT, and vendors.
Conducts exploratory data analysis to uncover trends, seasonality, anomalies, and root causes.
Supports retail performance reporting, promotional effectiveness, and vendor analytics.
Provides clear documentation and logic traceability for analysts and business users.
Collaborates with cross-functional teams such as merchandising, inventory, loyalty, and finance to support retail KPIs and data insights.
Acts as a data subject matter expert for scan and transaction data.
Provides guidance on best practices for data usage and transformation in retail contexts.
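As a hedged illustration of the validation and anomaly detection described above (the UPC and quantity rules below are simplified and hypothetical, not RaceTrac's actual logic):

```python
def validate_scan_rows(rows):
    """Flag rows with malformed UPCs or suspicious unit counts (illustrative rules)."""
    issues = []
    for row in rows:
        upc = str(row.get("upc", ""))
        if not (upc.isdigit() and len(upc) == 12):  # UPC-A codes are 12 digits
            issues.append((row, "malformed UPC"))
        if row.get("units", 0) < 0 and row.get("txn_type") != "return":
            issues.append((row, "negative units outside a return"))
    return issues

rows = [
    {"upc": "012345678905", "units": 3, "txn_type": "sale"},
    {"upc": "ABC123", "units": -1, "txn_type": "sale"},
]
for row, reason in validate_scan_rows(rows):
    print(reason, "->", row)
```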
What We're Looking For:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
3-5 years of experience in data engineering, ideally in a retail environment.
Experience working with scan-level data from large retail chains or CPG vendors.
Familiarity with retail ERP systems (e.g., SAP, Oracle), merchandising tools, and vendor data feeds.
Expert-level SQL and experience working with retail schema structures (e.g., SKUs, UPCs, store IDs).
Proficient in data pipeline and orchestration tools such as dbt, Airflow, Fivetran, or Apache Spark.
Experience with cloud-based data platforms (Snowflake, Google BigQuery, Azure Synapse, AWS Redshift).
Familiarity with retail concepts such as POS systems, promotional pricing, markdowns, units vs. dollars sold, sell-through rates, and planogram compliance.
Understanding of master data management (MDM) for products, stores, and vendors.
Experience with data profiling and quality frameworks (e.g., Great Expectations, Soda, Monte Carlo).
All qualified applicants will receive consideration for employment with RaceTrac without regard to their race, national origin, religion, age, color, sex, sexual orientation, gender identity, disability, or protected veteran status, or any other characteristic protected by local, state, or federal laws, rules, or regulations.
Staff Data Engineer
Greensboro, NC jobs
Market America, a product brokerage and Internet marketing company that specializes in One-to-One Marketing, is seeking an experienced Staff Data Engineer for our IT team.
Data Engineer (AI) Co-op - Fall 2026
Chicago, IL jobs
Ahold Delhaize USA, a division of global food retailer Ahold Delhaize, is part of the U.S. family of brands, which includes five leading omnichannel grocery brands - Food Lion, Giant Food, The GIANT Company, Hannaford and Stop & Shop. Our associates support the brands with a wide range of services, including Finance, Legal, Sustainability, Commercial, Digital and E-commerce, Technology and more.
Co-op Program Overview:
Get an insider view of the fast-changing grocery retail industry while developing relevant business, technical and leadership skills geared towards enhancing your career. This paid Co-op experience is an opportunity to help drive business results in an environment designed to promote and reward diversity, innovation and leadership. Our mission is to create impactful early talent programs that provide cohorts with meaningful project work, learning and development sessions, and mentorship opportunities.
Applicants must be currently enrolled in a bachelor's or master's degree program. Applicants must be currently authorized to work in the United States on a full-time basis and be available from July 13, 2026 through December 4, 2026. We have a hybrid work environment that requires a minimum of three days a week in the office. Please submit your resume including your cumulative GPA. Transcripts may be requested at a future date.
* Approximate 6-month Co-op session with competitive pay
* Impactful project work to develop your skills/knowledge
* Career assistance & mentoring in obtaining full time positions within ADUSA
* Leadership speaker sessions and development activities
* One-on-one mentoring in your area of interest
* Involvement in group community service events
* Networking and professional engagement opportunities
* Access to online career development tools and resources
* Opportunity to present project work to company leaders and gain executive visibility
Department/Position Description:
The Data Engineering team is primarily responsible for all web data feeds, including but not limited to pricing, fulfillment, and data warehouse operational feeds. The Co-op will work with new AI frameworks and technologies, including QA AI automation for data pipelines built on the newer Azure Fabric.
Qualifications:
* Working towards a degree in Computer Science, Data Analytics, and/or Engineering
* Exposure to or training with Azure AI or ChatGPT, or experience developing AI agents
* Experience writing APIs or interactive code with AI (see the sketch after this list)
* SQL experience
* Python is a plus
* Previous Co-op or Internship experience is a plus
* Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
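As a hedged sketch of "interactive code with AI" in the spirit of the qualifications above (using the third-party openai package's Azure client; the endpoint, key, deployment name, and prompt are placeholders, not real values):

```python
import os

from openai import AzureOpenAI  # third-party openai package

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder env var
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # hypothetical Azure deployment name
    messages=[{"role": "user", "content": "Summarize today's pipeline failures."}],
)
print(response.choices[0].message.content)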
Individual cohort pay rates vary based on location, academic year, and position.
ME/NC/PA/SC Salary Range: $20.90 - $35.70
IL/MA/MD Salary Range: $22.80 - $37.30
#LI-Hybrid #LI-CW1
At Ahold Delhaize USA, we provide services to one of the largest portfolios of grocery companies in the nation, and we're actively seeking top talent.
Our team shares a common motivation to drive change, take ownership and enable our brands to better care for their customers. We thrive on supporting great local grocery brands and their strategies.
Our associates are the heartbeat of our organization. We are committed to offering a welcoming work environment where all associates can succeed and thrive. Guided by our values of courage, care, teamwork, integrity (and even a little humor), we are dedicated to being a great place to work.
We believe in collaboration, curiosity, and continuous learning in all that we think, create and do. While building a culture where personal and professional growth are just as important as business growth, we invest in our people, empowering them to learn, grow and deliver at all levels of the business.
Senior Data Engineer
Austin, TX jobs
Tecovas was founded with the simple goal of making the world's best western boots, apparel and leather goods - and selling them at a fair price. We are a brand revolutionizing a category and welcoming first-time boot buyers and western enthusiasts alike.
Tecovas is looking for an experienced and forward-thinking Senior Data Engineer to lead the development of scalable, reliable, and high-performance data pipelines while helping shape best practices, mentoring team members, and driving technical direction. Reporting directly to the Director of Data Engineering, you will take ownership of designing and evolving the data architecture that powers critical business analytics and operational decision-making across Tecovas. The Senior Data Engineer position is an ideal role for someone who enjoys working with modern cloud technologies, solving complex data challenges, and partnering closely with business and technical teams to deliver high-impact solutions.
This role is required to be based in Austin, TX. Candidates must either be currently located in or willing to relocate to Austin, TX.
What you'll do:
Architect, design, and maintain scalable, resilient, and high-quality data pipelines and data products using Google Cloud Platform (GCP).
Lead orchestration and workflow management using Apache Airflow, including designing complex DAGs and enforcing best practices.
Develop and optimize enterprise-level data models in BigQuery to support analytics, reporting, and downstream applications (see the sketch after this list).
Drive improvements in data quality, governance, and observability, ensuring completeness, accuracy, and trust in our data.
Influence the overall data engineering strategy, contributing to standards, tooling choices, and best practices across the team.
Collaborate closely with analysts, business stakeholders, and engineering teams to understand evolving data needs and design scalable solutions.
Build and maintain robust Tableau datasets, sources, and dashboards in partnership with analysts.
Troubleshoot and resolve complex pipeline or warehouse issues, focusing on root-cause analysis and long-term reliability.
Document architectures, data models, and ETL/ELT patterns to promote shared understanding and operational excellence.
Stay current with modern data engineering tools, approaches, and trends and introduce them where they create value.
Mentor junior and mid-level engineers, contributing to technical growth across the team.
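As a hedged sketch of querying a BigQuery model like those described above (using the google-cloud-bigquery client with application-default credentials; the dataset, table, and column names are hypothetical):

```python
from google.cloud import bigquery  # third-party google-cloud-bigquery package

client = bigquery.Client()  # assumes application-default credentials are configured

# Hypothetical rollup over an orders model maintained in the warehouse.
query = """
    SELECT order_date, SUM(net_revenue) AS revenue
    FROM `analytics.orders`
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
"""
for row in client.query(query).result():
    print(row.order_date, row.revenue)
```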
Experience we're looking for:
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field.
8+ years of experience as a Data Engineer or in a similar technical data role.
Strong expertise with Google Cloud Platform (GCP), including BigQuery, Cloud Storage, Cloud Functions, Dataflow, and Terraform.
Advanced hands-on experience with Apache Airflow, including DAG design and workflow optimization.
Deep understanding of data warehousing concepts and data modeling principles, especially within BigQuery.
Strong proficiency in SQL and experience working with multiple database systems.
Production-level experience with Python or another programming language commonly used in data engineering.
Experience with dbt, including modeling, testing, versioning, and transformation workflows.
Proven ability to design, implement, and maintain scalable ETL/ELT frameworks.
Experience supporting and enabling analytics using Tableau or another BI tool.
Bonus Points if you have:
Experience working in the retail or consumer goods industry, especially supporting e-commerce, merchandising, marketing, or supply chain analytics.
Familiarity with Jinja templating for dynamic SQL generation (in dbt or Airflow).
Experience supporting AI/ML pipelines or marketing analytics.
Background in designing or maintaining data observability systems and tools.
What you bring to the table:
You have excellent communications skills and the ability to collaborate across technical and non-technical teams.
You possess strong analytical and problem-solving skills with high attention to detail.
You enjoy working in a fast-paced, high-growth environment supporting both operational and strategic data use cases.
Full Time Benefits & Perks:
We offer insurance plans that pay 79-90% of your health premium coverage and 100% of your dental & vision insurance coverage for your family/dependents
401(k) match
Paid Parental Leave
Flexible PTO policy
Corporate wellness program
Competitive salary:
$120,000-$150,000 annually (commensurate with experience)
Eligibility to participate in Corporate Bonus Program
Generous employee discounts!
About Us:
Based in Austin, TX, Tecovas brings the spirit of the West to the modern consumer. Handcrafting the best Western footwear, workwear, apparel, and accessories, Tecovas has grown rapidly since its founding as the first digitally native Western brand in 2015, serving customers through **************** Tecovas Stores from coast to coast, and select wholesale partners. We're certainly growing, and hiring passionate, humble, positive, and talented people determined to help us continue to grow!
Important note:
We strive to hire values-aligned people because we believe it takes all of us to be successful and to lead with grit, speed, and a clear vision of where we're headed. In a remote setting, interviewing at Tecovas may include phone interviews, virtual “on-site” interviews, and on-the-job mock cases. We are committed to running a thorough process for candidates with whom we identify a potential match, and we will do our best to follow up with every applicant! If you're on the fence, just give it a try!
We are an Equal Opportunity Employer and we encourage all qualified individuals to apply! Information collected during the application process is subject to our . Please note: Offers of employment may be conditional pending the completion of standard onboarding procedures.