
Data engineer jobs in Howell, NJ

- 4,807 jobs
  • Data Engineer Manager

    Wavestone

    Data engineer job in New York, NY

    Be part of a global consulting powerhouse, partnering with clients on their most critical strategic transformations. We are Wavestone. Energetic, solution-driven experts who focus as much on people as on performance and growth. Hand in hand, we share a deep desire to make a positive impact. We are an ambitious firm with a worldwide reach and an ever-expanding portfolio of clients, topics, and projects. In North America, Wavestone operates from hubs in New York City, Pittsburgh, Dallas, and Toronto. We work closely with CEOs and technology leaders to optimize IT strategy, sourcing models, and business processes, and are committed to building lasting partnerships with our clients. Are you a true team player, living strong values? Are you a passionate learner, aiming to grow every day? Are you a driven go-getter, tackling challenges head-on? Then we could be the right fit for you. Join Wavestone and thrive in an environment that's empowering, collaborative, and full of opportunities to turn today's challenges into tomorrow's solutions - contributing to one or more of our 4 core capabilities:

    Business Consulting | Business Strategy & Transformation, Organizational Effectiveness & Change Management, Operating Model Design & Agility, Program Leadership & Project Management, Marketing, Innovation, & Customer Experience

    Technology Consulting | IT Strategy & CTO Advisory, Technology Delivery, Data & Artificial Intelligence, Software & Application Development & Integration, SAP Consulting, Insurance/Reinsurance

    Cybersecurity | Cyber Transformation Remediation, Cyber Defense & Recovery, Digital Identity, Audit & Incident Response, Product & Industrial Cybersecurity

    Sourcing & Service Optimization | Global Services Strategy, IT & Business Process Services Outsourcing, Global In-House Center Support, Services Optimization, Sourcing Program Management

    Read more at *****************

    Job Description

    As a Data Engineer at the manager level at Wavestone, you will be expected to
    help address strategic as well as detailed client needs, specifically serving as a trusted advisor to C-level executives, and be comfortable supporting and leading hands-on data projects with technical teams. In this role, you will lead or support high-impact data transformation, data modernization, and data initiatives that accelerate and enable AI solutions, bridging business strategy and technical execution. You will architect and deliver robust, scalable data solutions, while mentoring teams and helping to shape the firm's data consulting offerings and skills. This role requires a unique blend of strategic vision, technical depth, and consulting leadership.

    Key Responsibilities

    Lead complex client engagements in data engineering, analytics, and digital transformation, from strategy through hands-on implementation. Advise C-level and senior stakeholders on data strategy, architecture, governance, and technology adoption to drive measurable business value. Architect and implement enterprise-scale data platforms, pipelines, and cloud-native solutions (Azure, AWS, Snowflake, Databricks, etc.). Oversee and optimize ETL/ELT processes, data integration, and data quality frameworks for large, complex organizations. Translate business objectives into actionable technical roadmaps, balancing innovation, scalability, and operational excellence. Mentor and develop consultants and client teams, fostering a culture of technical excellence, continuous learning, and high performance. Drive business development by shaping proposals, leading client pitches, and contributing to thought leadership and market offerings. Stay at the forefront of emerging technologies and industry trends in data engineering, AI/ML, and cloud platforms.

    Key Competencies & Skills

    Strategic Data Leadership: Proven ability to set and execute data strategy, governance, and architecture at the enterprise level.
    Advanced Data Engineering: Deep hands-on experience designing, building, and optimizing data pipelines and architectures (Python, SQL, Spark, Databricks, Snowflake, Azure, AWS, etc.). Designing Data Models: Experience creating conceptual, logical, and physical data models that leverage different data modeling concepts and methodologies (normalization/denormalization, dimensional typing, data vault methodology, partitioning/embedding strategies, etc.) to meet solution requirements. Cloud Data Platforms: Expertise in architecting and deploying solutions on leading cloud platforms (Azure, AWS, GCP, Snowflake). Data Governance & Quality: Mastery of data management, MDM, data quality, and regulatory compliance (e.g., IFRS 17, GDPR). Analytics & AI Enablement: Experience enabling advanced analytics, BI, and AI/ML initiatives in complex environments. Executive Stakeholder Management: Ability to communicate and influence at the C-suite and senior leadership level. Project & Team Leadership: Demonstrated success managing project delivery, budgets, and cross-functional teams in a consulting context. Continuous Learning & Innovation: Commitment to staying ahead of industry trends and fostering innovation within teams.

    Qualifications

    Bachelor's or master's degree in Computer Science, Engineering, Data Science, or a related field, or equivalent business experience. 8+ years of experience in data engineering, data architecture, or analytics consulting, with at least 2 years in a leadership or management role. Demonstrated success in client-facing roles, ideally within a consulting or professional services environment. Advanced proficiency in Python, SQL, and modern data engineering tools (e.g., Spark, Databricks, Airflow). Experience with cloud data platforms (Azure, AWS, GCP, Snowflake). Relevant certifications (e.g., AWS Certified Data Analytics, Azure Data Engineer, Databricks, Snowflake) are a strong plus. Exceptional problem-solving, analytical, and communication skills.
    Industry exposure: Deep experience in Insurance, Pharma, or Financial Services.

    Additional Information

    Salary Range: $157k-$200k annual salary. We are recruiting across several levels of seniority, from Senior Consultant to Manager. *Only candidates legally authorized to work for any employer in the U.S. on a full-time basis without the need for sponsorship will be considered. We are unable to sponsor or take over sponsorship of an employment visa at this time.

    Our Commitment: Wavestone Values and the Positive Way

    At Wavestone, we believe our employees are our greatest ambassadors. By embodying our shared values, vision, mission, and corporate brand, you'll become a powerful force for positive change. We are united by a shared commitment to making a positive impact, no matter where we are. This is best defined by our value base, "The Positive Way," which serves as the glue that binds us together: Energetic - A positive attitude gives energy to lead projects to success. While we may not control the circumstances, we can always choose how we respond to them. Responsible - We act with integrity and take ownership of our decisions and actions, considering their impact around us. Together - We want to be a great team, not a team of greats. The team's strength is each individual member; each member's strength is the team. We are Energetic, Responsible, and Together!

    Benefits

    25 PTO days / 6 federal holidays / 4 floating holidays. Generous parental leave (birthing parent: 4 months | supporting parent: 2 months). Medical / Dental / Vision coverage. 401K Savings Plan with Company Match. HSA/FSA. Up to 4% bonus based on personal and company performance, with room to grow as you progress in your career. Regular compensation increases based on performance. Employee Stock Purchase Plan (ESPP).

    Travel and Location

    This full-time position is based in our New York office. You must reside, or be willing to relocate, within commutable distance of the office.
    Travel requirements tend to fluctuate depending on your projects and client needs.

    Diversity and Inclusion

    Wavestone seeks diversity among our team members and is an Equal Opportunity Employer. At Wavestone, we celebrate diversity and inclusion. We have a strong global CSR agenda and an active Diversity & Inclusion committee with Gender Equality, LGBTQ+, Disability Inclusion, and Anti-Racism networks. If you need flexibility, assistance, or an adjustment to our recruitment process due to a disability or impairment, you may reach out to us to discuss this. Feel free to visit our Wavestone website and LinkedIn page to see our most trending insights!
    $157k-200k yearly 4d ago
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    Data engineer job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description

    This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications

    Strong proficiency in Data Engineering and Data Modeling. Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes. Strong Python background. Expertise in Extract, Transform, Load (ETL) processes and tools. Experience in designing, managing, and optimizing Data Warehousing solutions.
    $91k-123k yearly est. 1d ago
  • Data & Performance Analytics (Hedge Fund)

    Coda Search | Staffing

    Data engineer job in New York, NY

    Our client is a $28B NY-based multi-strategy hedge fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will work closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.

    Responsibilities

    Compile periodic fund performance analyses. Review and analyze portfolio performance data, benchmark performance, and risk statistics. Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner. Work with all levels of team members across the organization to help coordinate data feeds for various internal and external databases, in an effort to ensure the integrity and consistency of portfolio data reported across client reporting systems. Apply queries, pivot tables, filters, and other tools to analyze data. Maintain the client relationship management database and provide reports to Directors on a regular basis. Coordinate submissions of RFPs by working with the RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis. Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization. Provide various ad hoc data research and analysis as needed.

    Desired Skills and Experience

    Bachelor's degree with at least 2+ years of Financial Services/Private Equity data/client reporting experience. Proficiency in Microsoft Office, particularly Excel modeling. Technical knowledge of data analytics using CRMs (Salesforce), Excel, and PowerPoint. Outstanding communication skills and a proven ability to work effectively with all levels of management. Comfortable working in a fast-paced, deadline-driven, dynamic environment. Innovative and creative thinker. Must be detail-oriented.
    $68k-96k yearly est. 4d ago
  • Senior Data Engineer

    Godel Terminal

    Data engineer job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance. We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: “How on earth does this work so fast?” If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:

    Able to work out of our Manhattan office a minimum of 4 days a week. 5+ years of experience in a financial or startup environment. 5+ years of experience working on live data as well as historical data. 3+ years of experience in Java, Python, and SQL. Experience managing multiple production ETL pipelines that reliably store and validate financial data. Experience launching, scaling, and improving backend services in cloud environments. Experience migrating critical data across different databases. Experience owning and improving critical data infrastructure. Experience teaching best practices to junior developers.

    Preferred qualifications:

    5+ years of experience in a fintech startup. 5+ years of experience in Java, Kafka, Python, PostgreSQL. 5+ years of experience working with WebSocket libraries like RxStomp or Socket.io. 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode. 2+ years
    of experience shipping and optimizing Rust applications. Demonstrated experience keeping critical systems online. Demonstrated creativity and resourcefulness under pressure. Experience with corporate debt/bonds and commodities data.

    Salary range begins at $150,000 and increases with experience.

    Benefits: Health Insurance, Vision, Dental

    To try the product, go to *************************
    $150k yearly 19h ago
  • Senior Data Engineer

    Tekfortune Inc.

    Data engineer job in Iselin, NJ

    Sr. Data Engineer (Snowflake, Databricks, Python, PySpark, SQL, and Banking) - Iselin, NJ (need local profiles only). An in-person interview will be required. Overall 11+ years of experience; recent banking domain experience required. Only W2 & visa-independent candidates.

    Job Description: We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team.

    Responsibilities:

    Understand technical specifications. Discuss business requirements with business analysts and business users. Python/SQL Server/Snowflake/Databricks application development and system design. Develop and maintain data models and schemas to support data integration and analysis. Implement data quality and validation checks to ensure accuracy and consistency of data. Execute UT and SIT with business analysts to ensure high-quality testing. Support UAT with business users. Production support and maintenance of the application platform.

    Qualifications:

    General: Around 12+ years of IT industry experience. Agile methodology and SDLC processes. Design and architecture experience. Experience working in a global delivery model (onshore/offshore/nearshore). Strong problem-solving and analytical skills. Self-starter and collaborative team player who works with minimal guidance. Strong communication skills.

    Technical Skills: Mandatory (strong) - Python, SQL Server and relational database concepts, Azure Databricks, Snowflake, a scheduler (Autosys/Control-M), ETL, CI/CD. Plus: PySpark. Financial systems/capital markets/credit risk/regulatory application development experience.
    $82k-112k yearly est. 2d ago
  • Senior Data Engineer

    Apexon

    Data engineer job in New Providence, NJ

    Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.

    Job Description

    Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems. Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance. Work in tandem with our engineering team to identify and implement the most optimal solutions. Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design. Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures. Able to manage deliverables in fast-paced environments.

    Areas of Expertise

    At least 10 years of experience designing and developing data solutions in enterprise environments. At least 5+ years' experience on the Snowflake platform. Strong hands-on SQL and Python development. Experience designing and developing data warehouses in Snowflake. A minimum of three years' experience in developing production-ready
    data ingestion and processing pipelines using Spark and Scala. Strong hands-on experience with orchestration tools, e.g. Airflow, Informatica, Automic. Good understanding of metadata and data lineage. Hands-on knowledge of SQL analytical functions. Strong knowledge of, and hands-on experience in, shell scripting and JavaScript. Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering. Good understanding of, and exposure to, Git, Confluence, and Jira. Good problem-solving and troubleshooting skills. Team player with a collaborative approach and excellent communication skills.

    Our Commitment to Diversity & Inclusion:

    Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: the USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
    $82k-112k yearly est. 3d ago
  • Data Engineer

    Ztek Consulting (4.3 company rating)

    Data engineer job in Hamilton, NJ

    Key Responsibilities:

    Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory. Integrate and process Bloomberg market data feeds and files into trading or analytics platforms. Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion. Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL. Manage FTP/SFTP file transfers between internal systems and external vendors. Ensure data quality, completeness, and timeliness for downstream trading and reporting systems. Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.

    Required Skills & Experience:

    10+ years of experience in data engineering or production support within financial services or trading environments. Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric. Strong Python and SQL programming skills. Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP). Experience with Git, CI/CD pipelines, and Azure DevOps. Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling. Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools). Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments. Excellent communication, problem-solving, and stakeholder management skills.
    $89k-125k yearly est. 3d ago
  • Data Engineer

    Beauty By Imagination (BBI)

    Data engineer job in New York, NY

    About Beauty by Imagination:

    Beauty by Imagination is a global haircare company dedicated to boosting self-confidence with imaginative solutions for every hair moment. We are a platform company of diverse, market-leading brands, including Wet Brush, Goody, Bio Ionic, and Ouidad - all of which are driven to be the most trusted choice for happy, healthy hair. Our talented team is passionate about delivering high-performing products for consumers and salon professionals alike.

    Position Overview:

    We are looking for a skilled Data Engineer to design, build, and maintain our enterprise Data Warehouse (DWH) and analytics ecosystem - with a growing focus on enabling AI-driven insights, automation, and enterprise-grade AI usage. In this role, you will architect scalable pipelines, improve data quality and reliability, and help lay the foundational data structures that power tools like Microsoft Copilot, Copilot for Power BI, and AI-assisted analytics across the business. You'll collaborate with business stakeholders, analysts, and IT teams to modernize our data environment, integrate complex data sources, and support advanced analytics initiatives. Your work will directly influence decision-making, enterprise reporting, and next-generation AI capabilities built on top of our Data Warehouse.

    Key Responsibilities

    Design, develop, and maintain Data Warehouse architecture, including ETL/ELT pipelines, staging layers, and data marts. Build and manage ETL workflows using SQL Server Integration Services (SSIS) and other data integration tools. Integrate and transform data from multiple systems, including ERP platforms such as NetSuite. Develop and optimize SQL scripts, stored procedures, and data transformations for performance and scalability. Support and enhance Power BI dashboards and other BI/reporting systems. Implement data quality checks, automation, and process monitoring.
    Collaborate with business and analytics teams to translate requirements into scalable data solutions. Contribute to data governance, standardization, and documentation practices. Support emerging AI initiatives by ensuring model-ready data quality, accessibility, and semantic alignment with Copilot and other AI tools.

    Required Qualifications

    Proven experience with Data Warehouse design and development (ETL/ELT, star schema, SCD, staging, data marts). Hands-on experience with SSIS (SQL Server Integration Services) for building and managing ETL workflows. Strong SQL skills and experience with Microsoft SQL Server. Proficiency in Power BI or other BI tools (Tableau, Looker, Qlik). Understanding of data modeling, performance optimization, and relational database design. Familiarity with Python, Airflow, or Azure Data Factory for data orchestration and automation. Excellent analytical and communication skills.

    Preferred Qualifications

    Experience with cloud data platforms (Azure, AWS, or GCP). Understanding of data security, governance, and compliance (GDPR, SOC 2). Experience with API integrations and real-time data ingestion. Background in finance, supply chain, or e-commerce analytics. Experience with NetSuite ERP or other ERP systems (SAP, Oracle, Dynamics, etc.).

    AI-Focused Preferred Skills:

    Experience implementing AI-driven analytics or automation inside Data Warehouses. Hands-on experience using Microsoft Copilot, Copilot for Power BI, or Copilot Studio to accelerate SQL, DAX, data modeling, documentation, or insights. Familiarity with building RAG (Retrieval-Augmented Generation) or AI-assisted query patterns using SQL Server, Synapse, or Azure SQL. Understanding of how LLMs interact with enterprise data, including grounding, semantic models, and data security considerations (Purview, RBAC). Experience using AI tools to optimize ETL/ELT workflows, generate SQL scripts, or streamline data mapping/design.
    Exposure to AI-driven data quality monitoring, anomaly detection, or pipeline validation tools. Experience with Microsoft Fabric, semantic models, or ML-integrated analytics environments.

    Soft Skills

    Strong analytical and problem-solving mindset. Ability to communicate complex technical concepts to business stakeholders. Detail-oriented, organized, and self-motivated. Collaborative team player with a growth mindset.

    Impact

    You will play a key role in shaping the company's modern data infrastructure - building scalable pipelines, enabling advanced analytics, and empowering the organization to safely and effectively adopt AI-powered insights across all business functions.

    Our Tech Stack

    SQL Server, SSIS, Azure Synapse; Python, Airflow, Azure Data Factory; Power BI, NetSuite ERP, REST APIs; CI/CD (Azure DevOps, GitHub)

    What We Offer

    Location: New York, NY (hybrid work model). Employment Type: Full-time. Compensation: Competitive salary based on experience. Benefits: Health insurance, 401(k), paid time off. Opportunities for professional growth and participation in enterprise AI modernization initiatives.
    $90k-123k yearly est. 3d ago
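The star-schema and SCD experience the posting above asks for can be sketched in plain Python. This is an illustrative Type 2 slowly changing dimension update over in-memory rows; the table layout and field names are assumptions for the example, not anything from the posting:

```python
from datetime import date

def scd2_upsert(dimension, incoming, today):
    """Apply a Type 2 slowly changing dimension update.

    dimension: list of rows, each {"key", "attrs", "valid_from", "valid_to", "current"}
    incoming:  dict mapping business key -> latest attribute dict
    Returns the updated dimension list.
    """
    for row in dimension:
        key = row["key"]
        if row["current"] and key in incoming and row["attrs"] != incoming[key]:
            # Attribute change: close out the old version of the row.
            row["valid_to"] = today
            row["current"] = False
    existing_current = {r["key"] for r in dimension if r["current"]}
    for key, attrs in incoming.items():
        if key not in existing_current:
            # Open a new current version (also covers brand-new keys).
            dimension.append({"key": key, "attrs": attrs,
                              "valid_from": today, "valid_to": None,
                              "current": True})
    return dimension

# A customer moves from NJ to NY: the old row is closed, a new current row is added.
dim = [{"key": "C1", "attrs": {"state": "NJ"},
        "valid_from": date(2023, 1, 1), "valid_to": None, "current": True}]
dim = scd2_upsert(dim, {"C1": {"state": "NY"}}, date(2024, 6, 1))
```

In an SSIS or SQL Server pipeline the same logic would typically be a `MERGE` against the dimension table; the history-preserving shape (close old row, insert new current row) is identical.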
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data engineer job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups. In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications. To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 19h ago
  • Market Data Engineer

    Harrington Starr

    Data engineer job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do

    Own large-scale, time-sensitive market data capture + normalization pipelines. Improve internal data formats and downstream datasets used by research and quantitative teams. Partner closely with infrastructure to ensure reliability of packet-capture systems. Build robust validation, QA, and monitoring frameworks for new market data sources. Provide production support, troubleshoot issues, and drive quick, effective resolutions.

    What You Bring

    Experience building or maintaining large-scale ETL pipelines. Strong proficiency in Python + Bash, with familiarity in C++. Solid understanding of networking fundamentals. Experience with workflow/orchestration tools (Airflow, Luigi, Dagster). Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.).

    Bonus Skills

    Experience working with binary market data protocols (ITCH, MDP3, etc.). Understanding of high-performance filesystems and columnar storage formats.
    $90k-123k yearly est. 1d ago
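The "capture + normalization" work the listing above describes usually means mapping each exchange's record layout onto one common internal schema and validating it on the way in. A minimal Python sketch, where both vendor layouts and all field names are made-up examples rather than real exchange formats:

```python
# Normalize heterogeneous exchange tick records into one common schema.
# Both vendor layouts below are hypothetical, not real exchange protocols.

COMMON_FIELDS = ("symbol", "price", "size", "ts_ns")

def normalize_exchange_a(rec):
    # Vendor A: prices as integers in 1e-4 units, timestamps in microseconds.
    return {"symbol": rec["sym"], "price": rec["px"] / 10_000,
            "size": rec["qty"], "ts_ns": rec["ts_us"] * 1_000}

def normalize_exchange_b(rec):
    # Vendor B: floating-point prices, timestamps already in nanoseconds.
    return {"symbol": rec["ticker"], "price": rec["last"],
            "size": rec["volume"], "ts_ns": rec["ts_ns"]}

def normalize(source, rec):
    normalizers = {"A": normalize_exchange_a, "B": normalize_exchange_b}
    out = normalizers[source](rec)
    # Basic validation: every downstream consumer sees the same fields.
    assert set(out) == set(COMMON_FIELDS) and out["price"] > 0
    return out

tick = normalize("A", {"sym": "AAPL", "px": 1_893_400, "qty": 100,
                       "ts_us": 1_700_000_000_000_000})
```

The per-source normalizer functions are the seam where a new exchange or asset class gets added without touching downstream datasets, which is the design pressure the role describes.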
  • Data Engineer

    The Judge Group (4.7 company rating)

    Data engineer job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES

    Skillset: Data Engineer. Must haves: Python, PySpark, AWS (ECS, Glue, Lambda, S3). Nice to haves: Java, Spark, React JS. Interview process: 2 rounds; the 2nd will be on-site.

    You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:

    • Supports review of controls to ensure sufficient protection of enterprise data.
    • Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    • Updates logical or physical data models based on new use cases.
    • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    • Adds to team culture of diversity, opportunity, inclusion, and respect.
    • Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines (and infrastructure); leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility (to analysts and data scientists); ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.
    Required qualifications, capabilities, and skills:

    • Formal training or certification on data engineering concepts and 2+ years of applied experience.
    • Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and a working understanding of NoSQL databases.
    • Experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis.
    • Extensive experience in AWS; design, implementation, and maintenance of data pipelines using Python and PySpark.
    • Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
    • Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks.
    • Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs.
    • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar.
    • Advanced proficiency in a cloud data warehouse: Snowflake, AWS Redshift.
    • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar.
    • Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or stream; one or more data modeling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc.; Agile methodology; TDD or BDD; and CI/CD tools.

    Preferred qualifications, capabilities, and skills:

    • Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights. • Strong Python and Spark skills.
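The SQL expectations above (joins and aggregations used to curate data and build views for end users) can be sketched with a minimal, self-contained example. The table and column names here are invented for illustration; a real pipeline would run against the firm's data lake rather than an in-memory SQLite database:

```python
import sqlite3

# In-memory database with two hypothetical tables: accounts and transactions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, segment TEXT);
    CREATE TABLE transactions (txn_id INTEGER, account_id INTEGER, amount REAL);
    INSERT INTO accounts VALUES (1, 'retail'), (2, 'business');
    INSERT INTO transactions VALUES (10, 1, 25.0), (11, 1, 75.0), (12, 2, 40.0);
""")

# A curated view joining and aggregating for downstream consumers.
conn.execute("""
    CREATE VIEW segment_totals AS
    SELECT a.segment, COUNT(*) AS txn_count, SUM(t.amount) AS total_amount
    FROM transactions t
    JOIN accounts a ON a.account_id = t.account_id
    GROUP BY a.segment
""")

rows = conn.execute("SELECT * FROM segment_totals ORDER BY segment").fetchall()
print(rows)  # [('business', 1, 40.0), ('retail', 2, 100.0)]
```

The same join-then-aggregate shape carries over directly to PySpark DataFrames or a warehouse view in Snowflake or Redshift.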
    $79k-111k yearly est. 1d ago
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group 4.5 company rating

    Data engineer job in New York, NY

    Title: Data Engineer (Web Scraping technologies) Duration: FTE/Perm Salary: $125k-$190k plus bonus Responsibilities: Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability Web scraping request management, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scrapes, validation, and entitlement to users Fielding questions from users about the scrapes and websites Coordinating with Compliance on approvals and TOU reviews Some experience building data pipelines on the AWS platform utilizing existing tools like Cron, Glue, EventBridge, Python-based ETL, AWS Redshift Normalizing/standardizing vendor data and firm data for firm consumption Implement data quality checks to ensure reliability and accuracy of scraped data Coordinate with internal teams on delivery, access, requests, support Promote data engineering best practices Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field 2-5 years of experience in a similar role Prior buy side experience is strongly preferred (Multi-Strat/Hedge Funds) Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems AWS cloud experience with common services (S3, Lambda, Cron, EventBridge, etc.) Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.) Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.) Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g. Terraform, CloudFormation) Strong communication skills to work with stakeholders across technology, investment, and operations teams.
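The listing names Scrapy, BeautifulSoup, Selenium, and Playwright; the core idea of extracting structured data from markup can be sketched with nothing but the standard library's `html.parser`. The HTML snippet below is invented, and a real scraper would of course fetch pages over HTTP and handle far messier markup:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, text) pairs from anchor tags as the parser walks the HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the anchor currently open, if any
        self._text = []     # text fragments seen inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

html = '<ul><li><a href="/filings/10-k">10-K</a></li><li><a href="/filings/10-q">10-Q</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # [('/filings/10-k', '10-K'), ('/filings/10-q', '10-Q')]
```

Frameworks like Scrapy add crawling, throttling, and retry machinery on top of this same extract-and-store pattern.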
    $86k-120k yearly est. 1d ago
  • Azure Data Engineer

    Sharp Decisions 4.6 company rating

    Data engineer job in Jersey City, NJ

    Title: Senior Azure Data Engineer Client: Major Japanese Bank Experience Level: Senior (10+ Years) The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices. Key Responsibilities: Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows. Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions. Ensure data security, compliance, lineage, and governance controls. Partner with architecture, data governance, and business teams to deliver high-quality data solutions. Troubleshoot performance issues and improve system efficiency. Required Skills: 10+ years of data engineering experience. Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL. Azure certifications strongly preferred. Strong SQL, Python, and cloud data architecture skills. Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 19h ago
  • Azure Data Engineer

    Wall Street Consulting Services LLC

    Data engineer job in Warren, NJ

    Job Title: Data Engineer - SQL, Azure, ADF (Commercial Insurance) Experience: 12-20 Years Job Type: Contract Required Skills: SQL, Azure, ADF, Commercial Insurance We are seeking a highly skilled Data Engineer with strong experience in SQL, the Azure Data Platform, and Azure Data Factory, preferably within the Insurance domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines, integrating data from multiple insurance systems, and enabling analytical and reporting capabilities for underwriting, claims, policy, billing, and risk management teams. Required Skills & Experience Minimum of 12 years of experience in Data Engineering or related roles. Strong expertise in: SQL, T-SQL, PL/SQL Azure Data Factory (ADF) Azure SQL, Synapse, ADLS Data modeling for relational and analytical systems. Hands-on experience with ETL/ELT development and complex pipeline orchestration. Experience with Azure DevOps, Git, CI/CD pipelines, and DataOps practices. Understanding of insurance domain datasets: policy, premium, claims, exposures, brokers, reinsurers, underwriting workflows. Strong analytical and problem-solving skills, with the ability to handle large datasets and complex transformations. Preferred Qualifications Experience with Databricks / PySpark for large-scale transformations. Knowledge of Commercial Property & Casualty (P&C) insurance. Experience integrating data from Guidewire ClaimCenter/PolicyCenter, DuckCreek, or similar platforms. Exposure to ML/AI pipelines for underwriting or claims analytics. Azure certifications such as: DP-203 (Azure Data Engineer) AZ-900, AZ-204, AI-900
    $82k-112k yearly est. 2d ago
  • Python Data Engineer

    Tekvana Inc.

    Data engineer job in Iselin, NJ

    Job Title: Data Engineer (Python, Spark, Cloud) Pay: $90,000 per year DOE Term: Contract Work Authorization: US Citizens only (may need Security clearance in future) Job Summary: We are seeking a mid-level Data Engineer with strong Python and Big Data skills to design, develop, and maintain scalable data pipelines and cloud-based solutions. This role involves hands-on coding, data integration, and collaboration with cross-functional teams to support enterprise analytics and reporting. Key Responsibilities: Build and maintain ETL pipelines using Python and PySpark for batch and streaming data. Develop data ingestion frameworks for structured/unstructured sources. Implement data workflows using Airflow and integrate with Kafka for real-time processing. Deploy solutions on Azure or GCP using container platforms (Kubernetes/OpenShift). Optimize SQL queries and ensure data quality and governance. Collaborate with data architects and analysts to deliver reliable data solutions. Required Skills: Python (3.x) - scripting, API development, automation. Big Data: Spark/PySpark, Hadoop ecosystem. Streaming: Kafka. SQL: Oracle, Teradata, or SQL Server. Cloud: Azure or GCP (BigQuery, Dataflow). Containers: Kubernetes/OpenShift. CI/CD: GitHub, Jenkins. Preferred Skills: Airflow for orchestration. ETL tools (Informatica, Talend). Financial services experience. Education & Experience: Bachelor's in Computer Science or related field. 3-5 years of experience in data engineering and Python development. Keywords for Visibility: Python, PySpark, Spark, Hadoop, Kafka, Airflow, Azure, GCP, Kubernetes, CI/CD, ETL, Data Lake, Big Data, Cloud Data Engineering. Reply with your profiles to this posting and send it to ******************
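The data-quality responsibility mentioned above can be illustrated with a plain-Python sketch. The rules and field names are hypothetical; in a pipeline like the one described, checks of this shape would typically run as PySpark jobs over each ingested batch:

```python
def check_quality(records, required_fields, key_field):
    """Return a dict of simple data-quality metrics for a batch of records."""
    # Count records with a missing or empty required field.
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    # Count duplicate primary keys.
    keys = [r.get(key_field) for r in records]
    duplicates = len(keys) - len(set(keys))
    return {"rows": len(records), "missing_required": missing, "duplicate_keys": duplicates}

batch = [
    {"id": 1, "symbol": "AAPL", "price": 189.5},
    {"id": 2, "symbol": "", "price": 402.1},      # fails the required-field check
    {"id": 2, "symbol": "MSFT", "price": 402.1},  # duplicate key
]
report = check_quality(batch, required_fields=("symbol", "price"), key_field="id")
print(report)  # {'rows': 3, 'missing_required': 1, 'duplicate_keys': 1}
```

A pipeline would emit these metrics to monitoring and fail or quarantine the batch when thresholds are exceeded.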
    $90k yearly 2d ago
  • Data Architect

    Radiant Digital 4.1 company rating

    Data engineer job in Piscataway, NJ

    Data Architecture & Modeling Design and maintain enterprise-level logical, conceptual, and physical data models. Define data standards, naming conventions, metadata structures, and modeling best practices. Ensure scalability, performance, and alignment of data models with business requirements. Data Governance & Quality Implement and enforce data governance principles and policies. Define data ownership, stewardship, data lineage, and lifecycle management. Lead initiatives to improve data quality, consistency, and compliance. Enterprise Data Management Develop enterprise data strategies, including data integration, master data management (MDM), and reference data frameworks. Define and oversee the enterprise data architecture blueprint. Ensure alignment between business vision and data technology roadmaps.
    $99k-141k yearly est. 19h ago
  • Data Analytics Engineer

    Dale Workforce Solutions

    Data engineer job in Somerset, NJ

    Client: manufacturing company Type: direct hire Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets. This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams. This role is on-site five days per week in Somerset, NJ. Key Responsibilities Power BI Reporting & Administration Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions Develop and maintain data models to ensure accuracy, consistency, and reliability Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance Optimize Power BI solutions for performance, scalability, and ease of use ETL & Data Warehousing Design and maintain data warehouse structures, including schema and database layouts Develop and support ETL processes to ensure timely and accurate data ingestion Integrate data from multiple systems while ensuring quality, consistency, and completeness Work closely with database administrators to optimize data warehouse performance Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed Training & Documentation Create and maintain technical documentation, including specifications, mappings, models, and architectural designs Document data warehouse processes for reference, 
troubleshooting, and ongoing maintenance Manage data definitions, lineage documentation, and data cataloging for all enterprise data models Project Management Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team Collaborate with key business stakeholders to ensure departmental reporting needs are met Record meeting notes in Confluence and document project updates in Jira Data Governance Implement and enforce data governance policies to ensure data quality, compliance, and security Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness Routine IT Functions Resolve Help Desk tickets related to reporting, dashboards, and BI tools Support general software and hardware installations when needed Other Responsibilities Manage email and phone communication professionally and promptly Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel Perform additional assigned duties as needed Qualifications Required Minimum of 3 years of relevant experience Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience Experience with cloud-based BI environments (Azure, AWS, etc.) Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica) Proficiency in SQL for data extraction, manipulation, and transformation Strong knowledge of DAX Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake) Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools Strong analytical, problem-solving, and documentation skills Excellent written and verbal communication abilities High attention to detail and strong self-review practices Effective time management and organizational skills; ability to prioritize workload Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
    $82k-112k yearly est. 19h ago
  • Data Engineer

    Neenopal Inc.

    Data engineer job in Newark, NJ

    NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises. Role Description This is a full-time, hybrid, Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role. Key Responsibilities Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools. Data Integration: Integrate and transform data using industry-standard tools. Experience required with: AWS Services: AWS Glue, Data Pipeline, Redshift, and S3. Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage. Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift. Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity. Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization. Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions. 
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly. Required Skills and Experience Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar). Integration: Experience integrating data via RESTful / GraphQL APIs. Programming: Proficient in Python for ETL automation and SQL for database management. Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus). Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics. Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders. Authorization: Must have valid work authorization in the United States. Salary Range: $65,000-$80,000 per year Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company. Equal Opportunity Employer NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
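The ETL automation in Python called out above can be sketched end to end with the standard library. The file layout and field names are invented; in this role the extract step would pull from S3 or Blob Storage and the load step would target Snowflake or Redshift rather than SQLite:

```python
import csv
import io
import sqlite3

# Extract: a tiny CSV source, stood in for here by an in-memory file.
source = io.StringIO("order_id,amount,currency\n1,100.00,usd\n2,250.50,usd\n")

# Transform: cast types and normalize currency codes.
rows = [
    (int(r["order_id"]), float(r["amount"]), r["currency"].upper())
    for r in csv.DictReader(source)
]

# Load: write into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 350.5
```

Managed tools like AWS Glue or Azure Data Factory orchestrate the same extract-transform-load steps at scale, with scheduling and retries handled by the platform.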
    $65k-80k yearly 2d ago
  • Data Engineer

    Haptiq

    Data engineer job in New York, NY

    Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets. Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage. The Opportunity As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations. Responsibilities and Duties Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms). Ensure consistent data hygiene, normalization, and enrichment across source systems. Develop and maintain data models and data warehouses optimized for analytics and operational reporting. Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights. 
Own the documentation of data schemas, definitions, lineage, and data quality controls. Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets. Monitor pipeline performance and proactively resolve data discrepancies or failures. Contribute to architectural decisions related to internal data infrastructure and tools. Requirements 3-5 years of experience as a data engineer, analytics engineer, or similar role. Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt). Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift). Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday. Proficiency in Python or another scripting language for data manipulation. Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment). Strong understanding of data governance, documentation, and schema management. Excellent communication skills and ability to work cross-functionally. Benefits Flexible work arrangements (including hybrid mode) Great Paid Time Off (PTO) policy Comprehensive benefits package (Medical / Dental / Vision / Disability / Life) Healthcare and Dependent Care Flexible Spending Accounts (FSAs) 401(k) retirement plan Access to HSA-compatible plans Pre-tax commuter benefits Employee Assistance Program (EAP) Opportunities for professional growth and development. A supportive, dynamic, and inclusive work environment. Why Join Us? We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it. The compensation range for this role is $75,000 to $80,000 USD
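Integrating with SaaS platforms like Salesforce or JIRA, as described above, usually means walking paginated REST endpoints. Here is a minimal sketch with a pluggable fetch function; the cursor-based page shape is hypothetical, standing in for whatever the real API returns:

```python
def fetch_all(fetch_page):
    """Collect every record from a paginated API.

    fetch_page(cursor) must return (records, next_cursor); next_cursor is
    None when there are no more pages.
    """
    records, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        records.extend(page)
        if cursor is None:
            return records

# A fake two-page API standing in for a real HTTP client.
PAGES = {None: (["a", "b"], "p2"), "p2": (["c"], None)}

def fake_fetch(cursor):
    return PAGES[cursor]

print(fetch_all(fake_fetch))  # ['a', 'b', 'c']
```

Keeping the HTTP details behind `fetch_page` makes the pagination logic testable without network access, and makes it easy to swap in a client with retries and rate limiting.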
    $75k-80k yearly 2d ago
  • Data Engineer

    Beaconfire Inc.

    Data engineer job in East Windsor, NJ

    🚀 Junior Data Engineer 📍 E-Verified | Visa Sponsorship Available 🔍 About Us: BeaconFire, based in Central NJ, is a fast-growing company specializing in Software Development, Web Development, and Business Intelligence. We're looking for self-motivated and strong communicators to join our team as a Junior Data Engineer! If you're passionate about data and eager to learn, this is your opportunity to grow in a collaborative and innovative environment. 🌟 🎓 Qualifications We're Looking For: Passion for data and a strong desire to learn and grow. Master's Degree in Computer Science, Information Technology, Data Analytics, Data Science, or a related field. Intermediate Python skills (experience with NumPy, Pandas, etc. is a plus!) Experience with relational databases like SQL Server, Oracle, or MySQL. Strong written and verbal communication skills. Ability to work independently and collaboratively within a team. 🛠️ Your Responsibilities: Collaborate with analytics teams to deliver reliable, scalable data solutions. Design and implement ETL/ELT processes to meet business data demands. Perform data extraction, manipulation, and production from database tables. Build utilities, user-defined functions, and frameworks to optimize data flows. Create automated unit tests and participate in integration testing. Troubleshoot and resolve operational and performance-related issues. Work with architecture and engineering teams to implement high-quality solutions and follow best practices. 🌟 Why Join BeaconFire? ✅ E-Verified employer 🌍 Work Visa Sponsorship Available 📈 Career growth in data engineering and BI 🤝 Supportive and collaborative work culture 💻 Exposure to real-world, enterprise-level projects 📩 Ready to launch your career in Data Engineering? Apply now and let's build something amazing together! 🚀
    $82k-112k yearly est. 1d ago

Learn more about data engineer jobs

How much does a data engineer earn in Howell, NJ?

The average data engineer in Howell, NJ earns between $71,000 and $128,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Howell, NJ

$96,000