Lead Data Scientist
Senior Data Scientist Job 25 miles from Massapequa
We are seeking an individual to work with a Fortune 50 Broadcast Media & Entertainment leader located in New York, New York. As the Experimentation Manager, you will serve as the experimentation lead for Advertising. In this role, you will have the opportunity to partner with Ads, product, engineering, and other experts to shape a shared experimentation playbook that enables trustworthy personalization and visualizations for consumers.
Minimum Qualifications:
4+ years of applied experience in Ads experimentation at an eCommerce, social network, direct-to-consumer, media entertainment company or similar tech company
BA/BS in technical or quantitative discipline or significant work experience
Experience with experiment design and experimentation platforms (e.g., in-house builds, Optimizely, split.io)
Extensive knowledge of statistics and analytical concepts
Coding skills for analytics and data analysis (Python, R, SQL)
Experience in using data to drive product decisions and change opinions
Direct experience working with a diverse set of stakeholders
Excellent communication skills, with the ability to clearly distill the essence of your technical work for audiences of all levels and across multiple functional areas
Responsibilities:
Work with Ads, Product, Decision Science, UX and Engineering to lay out a comprehensive experimentation playbook and roadmap
Manage an experimentation platform roadmap that supports stakeholder goals and advances the experimentation practice
Develop an end-to-end experimentation process that is deeply integrated with the Ads, product, and engineering development lifecycle
Translate insights into Ads and product strategy to empower execution in a data-informed manner
Deeply understand the ecosystem, defining and analyzing metrics that evaluate the trajectory and inform the success of products
Communicate insights, vision and strategy with executive leadership and key stakeholders
Present research on statistics, experimentation, revenue impact, ads impact, user behavior, product enhancements, and metrics development/methodology to all levels of the company, ensuring transparency and partnership (a brief statistics sketch follows this list)
Foster a culture of execution excellence and analytical rigor
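As an illustrative aside (not part of the client's posting): the statistical core of an ad experiment readout is often a simple two-sample significance test. Below is a minimal Python sketch of a two-proportion z-test on hypothetical conversion counts; the variant totals are invented for illustration.

```python
# Minimal two-proportion z-test for an ad A/B experiment (hypothetical numbers).
from math import sqrt
from statistics import NormalDist

# Hypothetical results: conversions and impressions per variant.
conv_a, n_a = 1_210, 48_000   # control
conv_b, n_b = 1_390, 47_500   # treatment

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under the null
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided

print(f"lift: {p_b - p_a:+.4%}  z: {z:.2f}  p-value: {p_value:.4f}")
```

In practice, an experimentation platform wraps this kind of test with guardrails such as power analysis, multiple-testing corrections, and sequential-testing safeguards.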
What's in it for you?
Working for a well-known, globally leading Media Streaming organization
Exposure to high-level business professionals in a variety of departments and geographic locations
Opportunity to work and grow your career in a fast-paced environment
Lead Data Scientist
Senior Data Scientist Job 25 miles from Massapequa
Orange Quarter is partnering with a vertical SaaS rocketship, revolutionizing business intelligence in the retail/CPG space.
They've raised a Series D round of funding, have around 1,000 employees, and are continuing their growth in both North America and Europe.
What to expect:
The Lead Data Scientist will spearhead an organization-wide shift towards data-driven decision making. Alongside a team of junior data scientists, you'll work with data from product, sales, marketing, finance, and HR to help identify inefficiencies and potential solutions.
Perks:
Hybrid flexibility
Great work life balance, built on collaboration, flexibility and respect
Surrounded by an expert team to help you develop and progress
This is a new business unit, so there is a strong pathway for growth in the business
Requirements:
6+ years' experience in data science, analytics, or machine learning
Expert knowledge of Python and end-to-end understanding of the data platform
Strong SQL and Tableau, able to get creative when it comes to data visualization
Experience and/or strong interest in the fashion space is a plus
Data Scientist
Senior Data Scientist Job 27 miles from Massapequa
About the Company:
ICON International provides clients with financial solutions built around the concept of corporate barter. We help businesses leverage their underperforming assets by trading those goods for high-value professional products/services. It's a complex and dynamic field, driven by strategy, integrity, and creativity.
About the Role:
We are seeking an experienced Data Scientist to join the Data Science department within the ICON Technology Group (ITG), focusing on developing and implementing solutions to streamline processes. The ideal candidate will be a key contributor and will work directly with internal stakeholders to define requirements and deliver solutions that meet stakeholder needs. ICON greatly values the productivity that comes from in-person collaboration, particularly in ITG.
Our Data Science team supports ICON's core business by delivering solutions that increase profitability via insightful reporting and achieve cost savings through automation. We use Domo as our primary data visualization tool, giving our executives real-time views of the business and giving users actionable daily reporting. Domo is powered by data sources from ODBC SQL queries, API connectors, Excel uploads, and, most notably, Hitachi's robust Pentaho ETL engine. Candidates should be adept at pulling data from various providers and platforms (e.g., Meta, GA4, Google Campaign Manager, Adobe Analytics) via API connectors, transforming that data with Domo's ETL functionality, and designing clear dashboards that give clients real-time insights about their digital media. Candidates should also be comfortable leveraging technology (typically Python) to automate our processes, streamlining data reporting and compressing delivery timelines.
Responsibilities:
Work with client-facing departments to understand client information needs; identify and configure data sources to feed data into our environment.
Maintain and improve existing Python-based systems. For example, every week ICON receives hundreds of log files with important but disorganized data; we employ a complex Python process to automatically ingest, compile, and transform those files into standardized weekly client reporting, and the candidate would continue improving that process (a rough sketch of this kind of workflow follows this list).
Develop end-to-end solutions: identify data sources, develop API & ETL processes to generate production-ready datasets, perform data validation, and build dashboards.
Serve as an automation solutions architect: develop a framework for connecting and aggregating disparate data in an efficient manner; streamline processes to ensure quick, real-time reporting to internal and external stakeholders; create reusable templates that support dashboard design and delivery.
Develop and integrate internal and external client-facing reporting to ensure relevant metrics and insights are shown and understood.
Employees are expected to adopt a schedule of being physically in office 4 to 5 days per week.
Monitor current trends in key business areas, providing valuable insights to our Digital Media department to improve overall business performance and foster organic growth.
Adhere to and document standards and practices employed in delivering solutions.
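For illustration only, here is a minimal Python sketch of the kind of weekly log-file consolidation workflow described above; the directory layout, column names, and aggregation are hypothetical stand-ins for ICON's actual process.

```python
# Sketch: ingest a week's raw log files, normalize them, and emit one standardized report.
from pathlib import Path
import pandas as pd

RAW_DIR = Path("logs/incoming")                 # hypothetical drop location
OUT_FILE = Path("reports/weekly_client_report.csv")

frames = []
for path in sorted(RAW_DIR.glob("*.csv")):
    df = pd.read_csv(path)
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]  # normalize headers
    df["source_file"] = path.name               # keep lineage for validation
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined["date"] = pd.to_datetime(combined["date"], errors="coerce")
combined = combined.dropna(subset=["date"]).drop_duplicates()

# Aggregate into a standardized weekly shape (metric columns are placeholders).
weekly = (combined
          .groupby(["client", pd.Grouper(key="date", freq="W")])
          .agg(impressions=("impressions", "sum"), clicks=("clicks", "sum"))
          .reset_index())

OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
weekly.to_csv(OUT_FILE, index=False)
```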
Requirements:
4-5+ years of experience and a bachelor's degree in data science, computer science, statistics, information systems, or a related field.
Experience leveraging expert Python and SQL skills in a business environment; work in predictive analytics is a plus.
Intermediate/advanced Excel skills (at minimum: VLOOKUP, pivot tables, conditional formatting, named ranges, etc.).
(Preferred) Experience working in a digital media environment; familiarity with concepts like attribution, floodlight tags, click-through rates, etc.
(Preferred) Experience with business intelligence and data visualization tools, such as Domo, Data Studio, etc.
(Preferred) Experience in R, SAS, Python, MATLAB or other data analysis/statistical languages.
Strong problem solving and analytical skills.
Effective written communication skills - ability to express ideas clearly and concisely; Candidates shall demonstrate this by submitting a single-page resume.
Strong business analysis and verbal skills are a must, as this position interfaces with users at various levels of the business. Qualified candidates will demonstrate effective problem resolution skills, communicate solutions clearly and effectively and work cooperatively within the ICON Technology Group.
Equal Opportunity Statement:
At ICON, we are devoted to expanding opportunities for all employees without regard to race, color, religion, gender, age, national origin, sexual orientation, gender expression, disability and any other characteristic. Each person is valued for their talents, expertise, experience and perspective.
Senior Data Scientist
Senior Data Scientist Job 27 miles from Massapequa
Are you an experienced Data Scientist with experience working with financial data, specifically in Credit Risk Analytics? If so, this opportunity is for you!
Exciting senior-level Data Scientist opportunity with a well-known, rapidly growing Commercial Finance leader. This is a risk analytics and data science role dedicated to successfully managing the firm's credit and enterprise risk. You will use cutting-edge statistical analysis and machine learning to explore and analyze large, complex datasets to identify patterns, trends, and correlations. Competitive compensation, tremendous benefits package, and 401k. Cutting-edge technology, incredible culture, and working environment!
Employee testimonials:
Fantastic Place to Work. Hope to finish out the rest of my career with this great company!
Good salary and package deal. Friendly environment. Great career advancement. Everyone loves working here and you feel it!
Great company, culture, and work-life balance!
Title: Sr Data Scientist - Risk Analytics
Location: Stamford, CT
Salary: $170,000 - $190,000 + Generous Bonus, Incredible benefits, and retirement package!
Responsibilities:
You will lead the design and development of enterprise-grade deep analytics, exploring and analyzing large, complex datasets to identify patterns, trends, and correlations. You will drive collaboration with cross-functional teams to identify improvement opportunities, source data, create analyses, design solutions, and implement change.
You will be responsible for developing complex datasets, extracting meaningful insights, and deploying predictive models and data-driven solutions to optimize portfolio performance and manage risk appetite.
You will play a leadership role in advancing the capabilities and maturity of the firm's Risk Management and Strategic Planning functions.
Optimize the firm's analytical capabilities, data-driven decisioning, and overall performance.
Drive business results by enhancing the firm's data continuum including data architecture, data analytics, and data governance and their unique interdependencies.
Create useful, timely, and actionable analytical positions based on large structured and unstructured data sets.
Ensure analytical positions foster action and inform decisioning to enhance critical business processes and maximize outcomes.
Communicate analytical positions, and recommendations to stakeholders through clear and compelling data visualizations, reports, and presentations.
Implement automation and streamline processes to optimize the entire data and analytics platform, ensuring efficient throughput and high-performance outcomes.
Lead a comprehensive initiative to democratize the use of advanced analytics tools (such as Python, R, or SQL) and systems across the Risk and Strategic Planning teams.
Partner with the Data and Business Intelligence teams on the design, development, implementation, operation, and ongoing support of critical systems and tools.
Stay abreast of the latest advancements in data science, machine learning, and analytics technologies.
Represent Risk Management and Strategic Planning in Business Intelligence and Data team design discussions, code reviews, and project-related meetings.
Requirements:
8+ years' experience in Data Science or deep analytics in the financial sector
Background in Credit Risk analytics and strategy.
Proficiency in programming languages such as Python, R, SQL, and SAS, with experience in data manipulation, statistical analysis, and machine learning.
Experience developing and deploying Data Science solutions leveraging components like Azure OpenAI and Azure Notebooks.
Understanding of the banking/financial services industry business model
Excellent communication and collaboration skills, with the ability to distill complex technical concepts into understandable insights for non-technical stakeholders.
Strong analytical mindset and understanding of predictive performance models including application, assessment, and validation
Strong expertise in data visualization tools such as PowerBI or Tableau.
Experience training colleagues on programming languages/analytical tools such as Python, R, SQL, and SAS for data manipulation, statistical analysis, and machine learning.
Data Scientist
Senior Data Scientist Job 25 miles from Massapequa
A growing fin-tech firm is looking for a Data Scientist to join their team in New York, NY.
Compensation: $160-200k
Responsibilities:
Develop, deploy, and maintain machine learning models in production environments
Collaborate with cross-functional teams to ensure seamless integration of ML workflows
Utilize and improve Machine Learning Operations (MLOps) pipelines and procedures
Optimize data pipelines and model performance for scalability and efficiency
Conduct exploratory data analysis and feature engineering to support ML initiatives
Qualifications:
Advanced degree (Master's/PhD) in Data Science, Computer Science, Statistics, or a related field
4+ years of experience as a researcher or developer in data science or related fields
Strong experience programming in common data science languages including Python, R, and/or Matlab
Strong experience with machine learning frameworks and tools such as Tensorflow, Tesseract, PyTorch, and/or Keras
Solid understanding of ML Ops practices, tools, and deployment strategies
Proven Data Science experience
Experience in fin-tech or a strong interest in financial technologies is highly desirable
Data Scientist
Senior Data Scientist Job 25 miles from Massapequa
Software Developer for ML Data Engineering
Software developers who have experience working with complex data infrastructure and systems are sought to join our drug discovery software team in New York City. This role offers an exciting opportunity to create data infrastructure for our drug discovery and AI efforts, working closely with a world-class team of chemists, biologists, machine learning engineers and researchers, and computer scientists.
Successful hires will develop systems and infrastructure for modeling, curating, and indexing petabytes of data; implement pipelines for processing computational and experimental data sets; and develop/optimize workflows to deliver large data sets to machine learning training pipelines.
Ideal candidates will have a passion for large-scale data management; strong Python programming skills; and experience with machine learning systems, frameworks, and data lifecycles. Relevant areas of expertise include engineering of large-scale chemical databases, architecting workflows and automating processes for the handling of life science data, fluency with Linux/UNIX tools, and familiarity with writing scientific software, but specific knowledge of any of these areas is less critical than intellectual curiosity, versatility, and a track record of achievement. This is a unique opportunity to contribute to the development of transformative technology within a dynamic, interdisciplinary environment.
Relevant experience:
Demonstrated experience with large-scale data management in the life sciences (ideally dealing with chemistry databases)
Experience architecting infrastructure for the handling of life science data, and writing scientific software
Strong Python programming skills; C++ is a nice-to-have
Fluent knowledge of Linux/UNIX tools
Excellent communication skills
Responsibilities:
Develop systems and infrastructure for modeling, curating, and indexing petabytes of data
Implement pipelines for processing computational and experimental data sets
Develop/optimize workflows to deliver large data sets to machine learning training pipelines
Data Lead
Senior Data Scientist Job 25 miles from Massapequa
Full Time | Leopard | New York, NY / Hybrid | Data (Science/Engineering) Lead
Leopard is an early-stage B2B insurtech startup currently seeking a full-time Data (Science/Engineering) Lead to help us modernize the life insurance and annuities markets as the first dedicated member of our data team. You will work closely with our CTO and engineering team to build and grow our policy management platform and deliver our clients data-driven analytics and AI-enabled workflows. If you are passionate about using state-of-the-art AI to deliver transformative products, excited to be an early part of our journey bringing data and analytics to insurance distribution, and want to contribute to a high-performing team with unlimited room for growth, then this is the perfect opportunity for you.
Note: While the role will primarily focus on hands-on research and development to start, we are open to candidates who prefer to frame the role on the management track, and we can discuss title and level as appropriate.
In your first six months at Leopard, you will:
Build and enhance our data pipelines using a combination of Node and Python on AWS, with a focus on accuracy and reliability
Conduct research studies involving predictive modeling and prompt engineering, working with our CTO and engineering team to ensure we are maximizing our use of LLMs and other AI technologies in the platform
Develop our initial work processes and success metrics for our data team to enable its contribution to our feature roadmap and further growth in 2025
Stay up-to-date with continued developments in AI/ML, NLP, and related areas of research, ensuring Leopard remains on the cutting edge of using these tools to build products and workflows that revolutionize the insurance industry
A little about you:
Have 7-10+ years of experience in a variety of data roles, ideally including data engineering, data science, and/or data analytics
Have strong analysis skills using Python/NumPy/Pandas, complemented by some development experience in Python, JavaScript, or similar to build reliable, cloud-based data pipelines
Professional experience with prompt engineering and LLMs
Experience leading and mentoring data professionals and/or building a data team from the ground up
Owned projects end-to-end and delivered them by hitting key milestones
Have great communication skills; you work well with others, and you care more about solving the problem than coming up with the best solution yourself
Earned a Bachelor's degree in Data Science, Computer Science, or Mathematics or have an equivalent combination of education and work experience
Are (or can be) located in or around the New York area, with the ability and desire to work in our office (near Lincoln Center) at least 1-2x per week
Extra Points:
Experience in, or familiarity with, the life insurance industry
Early-stage startup experience
Professional working proficiency in Spanish
Salary range:
The base salary range for this role is $175,000 - $225,000. The base salary range represents the anticipated low and high end of the salary range for this position. Actual salaries may vary and may be above or below the range based on various factors, including but not limited to work location, experience, and expected performance. The range listed is just one component of Leopard's total compensation package for employees. Other rewards may include equity awards and other long and short-term incentives. In addition, Leopard provides a variety of benefits to employees, including health insurance coverage, a 401K program, paid holidays, and encouraged paid time off (PTO).
About Leopard:
Leopard is an early-stage insurance technology startup looking to revolutionize the life insurance and annuity markets. We've developed technology that makes it easy for insurance brokers and financial advisors to find best-fit coverage for their clients on an ongoing basis, but that's just the start. Our mission is to build a data business that fundamentally changes the way life and annuities products are sold. Leopard is backed by The D.E. Shaw Group, one of the largest hedge funds in the United States, known for its quantitative rigor. Founded in 2023, Leopard is headquartered in New York, New York. For more information about Leopard, visit *******************
At Leopard, we are committed to hiring diverse talent from different backgrounds and as such, it is important for us to provide an inclusive work environment for all. We do not discriminate on the basis of race, gender identity, age, religion, sexual orientation, veteran or disability status, or any other protected class. As an equal-opportunity employer, we encourage and welcome people of all backgrounds to apply.
Data Scientist
Senior Data Scientist Job 25 miles from Massapequa
Our client, an established and growing food manufacturer, is seeking a Data Scientist to join their team. The ideal candidate will have a proven track record of turning complex data into actionable insights, leading end-to-end data projects, and leveraging advanced tools to drive decision-making. You will collaborate with cross-functional teams to design, implement, and optimize data-driven strategies that enhance operational efficiency and product innovation.
This Role Offers:
The opportunity to work with a leading name in the food/beverage manufacturing industry.
A role with significant impact on the company's efficiency and growth.
Competitive compensation and a comprehensive benefits package.
A collaborative work environment that values innovation and leadership.
Focus:
Lead data-focused projects from inception through completion, including data engineering, analysis, predictive modeling, reporting, and visualization.
Utilize Python for advanced data analysis, statistical modeling, and machine learning to develop and optimize solutions.
Design and write efficient SQL queries and stored procedures to extract and transform data for analytical purposes.
Create comprehensive dashboards and data visualizations to communicate findings to stakeholders.
Work closely with cross-functional teams to translate business needs into data-driven solutions.
Skill Set:
Bachelor's degree or higher in Analytics, Statistics, Data Science, or a related field.
A minimum of 3 years of experience in a data analysis or data science role.
Strong knowledge of Python, especially in the areas of data analysis, statistical modeling, and machine learning frameworks.
Expertise in SQL, with advanced skills in writing efficient queries and optimizing database performance.
Proficiency with Microsoft Office Suite (Excel, Word, PowerPoint) for documentation and presentation of findings.
Preferred Skills:
Experience with Apache Airflow for managing and scheduling workflows.
Familiarity with Docker for containerizing workflows and maintaining project environments.
Comfortable with command-line interaction in Unix/Linux operating systems.
Hands-on experience with Plotly Dash or Streamlit for developing data applications and dashboards.
Knowledge of Selenium for web scraping and automated data collection is a plus.
About Blue Signal:
Blue Signal is an award-winning, executive search firm specializing in food & agriculture recruitment. Our food & agriculture recruiting team unites professionals in agribusiness, food processing, and agricultural technology with innovative companies. Learn more at bit.ly/40LrcFx
Principal Data Engineer
Senior Data Scientist Job 25 miles from Massapequa
Ichor Strategies seeks a Principal Data Engineer to lead the data architecture of our nascent SaaS platform which connects companies to the communities they operate in so that they can understand different perspectives, build authentic relationships, and foster mutual success. The ideal candidate will have extensive experience in driving architecture, making individual code contributions, and evolving a prototype into a scalable, extensible data platform.
Ichor Strategies presents some rare opportunities for a Software Engineer to work at an organization that
is an MBE-certified Black-owned business
is truly mission-driven
has a truly diverse staff and inclusive culture
is one of Crain's top 100 best places to work in NY
Reporting to the Director of Engineering, this position is fully remote, with an option to work a hybrid schedule in one of our offices in Brooklyn, Chicago, or Atlanta if you prefer. This position requires a minimum 40-hour workweek and occasional evening and/or weekend work, depending upon the workload.
Duties & Responsibilities:
Set architectural direction for the data systems that power our application, and help evolve those systems toward that architectural direction. This includes ingestion and processing pipelines, data storage layers including a robust multi-purpose data lake, and a data access layer including materialized views that are optimized for application performance
Directly contribute code that is readable, maintainable, and thoroughly tested
Incorporate theoretical and practical knowledge of non-functional requirements relevant to distributed data engineering such as scalability, availability, extensibility, testability, etc.
Build prototypes and proofs of concepts as needed to aid in technical decision making
Adhere to and advocate for software engineering best practices
Work to improve and migrate existing code, making deliberate and thoughtful tradeoffs where necessary.
Be independently responsible for the entire lifecycle of projects and systems, including design, development, and deployment
Collaborate with data engineers, application engineers, and other stakeholders across the organization
Break down complex projects into simple systems that can be built and maintained by less experienced engineers
Be considered an expert by peers, recognized for high quality and quantity of hands-on technical contributions
Mentor and assist other engineers via pair programming, code reviews, knowledge sharing presentations, etc.
Improve productivity and velocity across the team by creating tooling, reusable components, streamlined processes, etc.
Education & Experience:
10+ years of experience as a professional Software Engineer
7+ years of experience working with distributed ingestion, processing, storage, and access of big data (bonus points for experience with AI/ML)
7+ years of experience leveraging tools and infrastructure provided by GCP, AWS, or Azure
Experience at an early-stage startup taking a product from 0 to 1
Skills & Abilities:
Deep knowledge and experience with architectures for modern data infrastructure including data lakes, data warehouses, ETL pipelines, physical and logical data formats, data processing systems, data reliability, security, governance, and performance
Deep knowledge of different kinds of data stores (row-oriented, columnar, key/value, document, graph, etc.) and their use cases and tradeoffs
Proficiency with various big data technologies including some of these: BigQuery, Redshift, Snowflake, Parquet, Avro, Beam, Spark, Flink, GCP Dataflow, AWS Glue, Azure Data Factory
Expertise in Java (bonus points for experience with TypeScript)
High standards and expectations for software that is thoughtfully and meticulously engineered
Excellent written and verbal communication skills
Empathy for others
About Ichor
Ichor Strategies is a management consulting firm specializing in connecting businesses to the communities in which they operate to build impactful strategies that deliver tangible results. We are a trusted advisor to Fortune 100 companies, providing a combination of strategic communications, policy support, and relationships in urban communities.
We are a powerhouse team of passionate advisors and experts with a combination of business acumen and cultural fluency, operating at the intersection of urban communities and major corporations. Trusted by those organizations and communities alike, we are uniquely positioned to bridge gaps to progress and unlock powerful opportunities for mutual success.
A certified MBE, our diversity powers our ability to access all communities and understand nuances that others might miss. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, disability, age, veteran status, and other protected status as required by applicable law.
Senior Data Architect
Senior Data Scientist Job 27 miles from Massapequa
ThoughtFocus' innovation center brings technology of the future to solve the business challenges of today. Our business solutions are built by experts who understand the challenges that are unique to our clients' organizations and industries.
About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our team.
The Architect will be responsible for designing the overall data architecture, selecting tools, and designing the system to optimally support reporting and real-time data syndication for transaction processing across systems. The architect will be the overall lead of the project.
Essential Job Functions
Expertise in building Data Assets / Data Warehouse on Azure ecosystem.
This position calls for experience in strategizing cloud data warehouse architecture on the MS Azure Ecosystem.
Understanding of Cloud Data Services for OLTP systems.
Should have a background in the private equity/capital markets industry.
Understanding of application architecture and deployment in cloud environments.
The day-to-day responsibilities would include offering expert guidance to development teams on various facets of cloud data architecture.
The architect needs to take responsibility for projects throughout their lifecycles.
The role requires designing and implementing vendor system integration points.
Technical Skills and Experience
10+ years of overall IT industry experience in the private equity/capital markets space
5+ years leading teams and in a solution architecture role using service and hosting solutions such as private/public cloud IaaS, PaaS and SaaS platforms.
Bachelor's degree in computer science or equivalent experience.
Proficiency with Azure Data Factory, Azure Data Lake, Azure Synapse, Azure Purview
Expertise with API Integrations
Expertise with Databricks.
Experience with relational, graph and/or unstructured data technologies such as SQL Server, Azure SQL, Azure Data
Knowledge of cloud security controls
Experience with multi-tier system and service design and development for large enterprises
Experience with configuration management and automation tools using Azure Integration Services
Knowledge of programming and scripting languages such as JavaScript, PowerShell, SQL, .NET, etc.
Exposure to infrastructure and application security technologies and approaches
Proficiency with different operating systems, viz. Windows, Mac, and Linux.
Should bring thought leadership in designing a cloud data architecture.
Sr. Data Architect - Engineer 3
Senior Data Scientist Job 25 miles from Massapequa
Dexian is seeking a Senior Data Architect (Engineer 3) for an opportunity with a client located in Dublin, CA, or New York City, NY.
Responsibilities:
Understand what the data is and how it is used in the business processes
Drive the standards for data quality across multiple data sets and drive data trust across the enterprise
Define and own the data quality contract with source teams and ensure consistency is maintained in the data warehouse
Work with business partners to define ways to leverage data to develop platforms and solutions to drive business growth
Maintain knowledge of external and internal data capabilities and trends; provide leadership and facilitate the evaluation of vendors and products; contribute to oversight and governance for the configuration and implementation of products
Responsible for the technical direction of data/information relating to applications within a portfolio and/or delivery of secure data architecture
Monitor the work of vendor partner resources and Database Administrator/Analysts
Collaborate with key stakeholders to ensure data architecture alignment with key portfolio initiatives on the basis of cost, functionality and practicality
Create documentation and presentations, lead discussions with business and technology owners to receive buy-in and approval
Document and articulate relationships between business goals and information technology solutions
Requirements:
At least 10 years of in-depth data engineering experience and execution of data pipelines, DataOps, scripting, and SQL queries
5+ years proven data modeling skills - must have demonstrable experience designing models for data warehousing and modern analytics use cases (e.g., from operational data store to semantic models)
2 to 3 years of experience in modern data architectures that support advanced analytics, including Snowflake on Azure
Experience with Snowflake and other Cloud Data Warehousing / Data Lake preferred
Bachelor's Degree in Computer Science, Information Systems, Engineering, Business Analytics, or Business Management
Expert in engineering data pipelines using various data technologies - ETL/ELT, big data technologies (Hive, Spark) on large-scale data sets demonstrated through years of experience
Hands-on data warehouse design, development, and data modeling best practices for modern data architectures
Highly proficient in at least one of these programming languages: Java or Python
Experience with modern data modeling and data preparation tools
Experience adding data lineage and technical glossaries from data pipelines to data catalog tools
Highly proficient in Data analysis - analyzing SQL, Python scripts, ETL/ELT transformation scripts
Highly skilled in data orchestration, with experience in tools like Control-M and Apache Airflow. Hands-on DevOps/DataOps experience required
Experience with StreamSets, DBT preferred
Knowledge/working experience in reporting tools such as MicroStrategy or Power BI would be a plus
Self-driven individual with the ability to work independently or as part of a project team
Experience working in an Agile Environment is preferred
Familiarity with Retail domain preferred
Strong communication skills are required with the ability to give and receive information, explain complex information in simple terms and maintain a strong customer service approach to all users
The ideal candidate will be conversant with modern data engineering principles:
Utilization of medallion architecture (a minimal sketch follows this list)
Snowflake on Azure, StreamSets, DBT, Power BI stack
Solid orchestration and scheduling - Control M or Airflow
Well-versed with naming conventions, standards and best practices, automation for large scale delivery, data ops, data observability and data quality
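For reference, here is a minimal sketch of medallion-style layering (bronze/silver/gold), expressed as Snowflake SQL issued from Python. The table names, schema, and credentials are hypothetical; the client's actual stack would layer StreamSets, DBT models, and Control-M/Airflow orchestration on top of this pattern.

```python
# Sketch: medallion-style layering (bronze -> silver -> gold) in Snowflake.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",   # hypothetical credentials
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)

steps = [
    # Bronze: land raw data exactly as received.
    """CREATE TABLE IF NOT EXISTS BRONZE_ORDERS (
           raw VARIANT,
           loaded_at TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP())""",
    # Silver: typed, deduplicated, conformed records.
    """CREATE OR REPLACE TABLE SILVER_ORDERS AS
       SELECT DISTINCT raw:order_id::STRING          AS order_id,
                       raw:amount::NUMBER(12, 2)     AS amount,
                       raw:ordered_at::TIMESTAMP_NTZ AS ordered_at
       FROM BRONZE_ORDERS
       WHERE raw:order_id IS NOT NULL""",
    # Gold: business-level aggregates served to BI tools (e.g., Power BI).
    """CREATE OR REPLACE TABLE GOLD_DAILY_REVENUE AS
       SELECT DATE(ordered_at) AS order_date, SUM(amount) AS revenue
       FROM SILVER_ORDERS
       GROUP BY 1""",
]

with conn.cursor() as cur:
    for sql in steps:
        cur.execute(sql)
conn.close()
```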
Dexian is a leading provider of staffing, IT, and workforce solutions with over 12,000 employees and 70 locations worldwide. As one of the largest IT staffing companies and the 2nd largest minority-owned staffing company in the U.S., Dexian was formed in 2023 through the merger of DISYS and Signature Consultants. Combining the best elements of its core companies, Dexian's platform connects talent, technology, and organizations to produce game-changing results that help everyone achieve their ambitions and goals.
Dexian's brands include Dexian DISYS, Dexian Signature Consultants, Dexian Government Solutions, Dexian Talent Development and Dexian IT Solutions. Visit ******************* to learn more.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Senior Data Engineer
Senior Data Scientist Job 25 miles from Massapequa
At Roots, our mission is to make work more human. We are developing fully autonomous, AI-powered Digital Coworkers that streamline tedious and repetitive tasks for the enterprise. By tackling core challenges in natural language understanding and computer vision, we are building an automation product that embodies the future of work. Our platform makes automation accessible to everyone, enabling users to generate automations by describing tasks in simple English, while solving complex business problems with enterprise-grade results and performance.
Harnessing the power of AI, machine learning, and analytics on our data is fundamental to our business. Our production processes capture vast amounts of data, which feed directly into our training and analytics pipelines. As this data continues to grow, we need a data engineer to design and build the data architecture that underpins our AI systems and supports our evolving needs for reporting and business analytics.
At Roots, we are committed to building a team of talented individuals who share our love for innovation and problem-solving. We encourage curious minds from a wide array of disciplines and backgrounds to apply.
Responsibilities:
Collaborate with stakeholders across engineering, research, product, sales, and marketing to capture their data production workflows.
Design and build ETL pipelines or similar methodologies to merge and refine data into a centralized data lake, optimizing it for analytics, reporting, and model training.
Implement de-identification techniques to protect sensitive information (one common approach is sketched after this list).
Build, monitor and maintain data infrastructure, ensuring reliability, performance, scalability, and cost-effectiveness.
Ensure security, integrity, and compliance of data according to industry standards
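As a purely illustrative sketch of one common de-identification approach (deterministic salted hashing of direct identifiers): the column names and salt handling below are assumptions, and a production system would add proper key management and policy controls.

```python
# Sketch: pseudonymize direct identifiers before data lands in the shared lake.
import hashlib
import os

import pandas as pd

SALT = os.environ.get("DEID_SALT", "change-me")   # hypothetical; keep in a secrets manager

def pseudonymize(value: str) -> str:
    # Deterministic salted hash: joins still work, raw values are not exposed.
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

df = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],  # direct identifier
    "task_duration_s": [42.0, 17.5],              # safe analytic payload
})

for col in ("email",):                            # list every direct-identifier column
    df[col] = df[col].astype(str).map(pseudonymize)

print(df)
```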
Qualifications:
4+ years of experience as a data engineer.
Expertise in ETL scheduling tools like Airflow, Prefect, Luigi, or comparable frameworks.
Proficiency in Python.
Experience in working with databases, data warehouses, and data lakes.
Experience with DataBricks, Snowflake, or similar data platforms
Strong written and verbal communication skills, with a particular emphasis on the written word. We greatly appreciate public articles or blogs that showcase writing skills.
Ability to work independently, prioritize tasks, and manage multiple projects simultaneously in a fast-paced and dynamic environment.
As a startup, Roots Automation offers a high-paced environment with ample growth and learning opportunities across multiple disciplines.
AWS Data Engineer - Databricks / Starburst
Senior Data Scientist Job 25 miles from Massapequa
SOFT's client located in New York, NY ( Hybrid ) is looking for a DevOps Cloud Engineer - Databricks / Starburst / Terraform for a long-term contract assignment.
PLEASE NOTE THE FOLLOWING BEFORE APPLYING:
WE ARE NOT ACCEPTING ANY 3RD PARTY SOLICITATIONS FOR THIS OR ANY OF OUR JOB POSTINGS OR REQUISITIONS. ANY SUCH INQUIRIES WILL NOT BE CONSIDERED OR RESPONDED TO.
WE CAN ONLY WORK WITH DIRECT APPLICANTS WHO ARE AUTHORIZED TO WORK IN THE US WITHOUT SPONSORSHIP
THIS IS A HYBRID ROLE, REQUIRING 2 DAYS A WEEK AT THE CLIENT SITE IN LOWER MANHATTAN. THUS, ANY CANDIDATES MUST BE IN THE NY/NJ/CT TRI-STATE AREA AT THE TIME OF APPLICATION. CANDIDATES OUTSIDE OF THIS RADIUS WILL NOT BE CONSIDERED
Qualifications:
- 6-10 years of Cloud Infrastructure Engineering Experience with AWS
- Experience building and managing enterprise cloud infrastructure.
- Strong hands-on experience on AWS and/or Azure environments
- Strong hands-on experience with Infrastructure as Code (IaC) using Terraform, CloudFormation.
- Strong hands-on experience working with various AWS services including but not limited to IAM, EC2, S3, ELB/ALB, RDS/Aurora, ElastiCache and serverless computing services
- Strong hands-on experience with Databricks and Starburst data platforms on AWS cloud.
- Demonstrated experience of ownership and responsibility for the end-to-end design, development, testing, and release of key components of a data lake solution using the Databricks and Starburst platforms.
- Demonstrated experience developing and maintaining data lakes with advanced capabilities such as Delta Sharing
- Strong understanding of best practices in management of data, including master data, reference data, metadata, data quality and lineage.
- Proven experience in setting up DevOps infrastructure, CI/CD pipelines, driving automated build management using GitLab preferred.
- Extensive expertise in providing guidance, building highly available/fault-tolerant enterprise class infrastructure with multiple-region and multi-AZ models.
- Working knowledge of implementing cloud identity and access management solutions to enforce security guidelines.
- Working knowledge and experience with DevSecOps operating model
- Experience working in Agile teams.
- Development experience using Python and/or Java preferred.
- Proven experience translating architectural plans and business requirements into infrastructure implementations.
- Self-starter with the ability to effectively plan, prioritize and manage multiple projects, tasks and deliverables throughout the project lifecycle.
Data Engineer - Spark / Kafka / AWS
Senior Data Scientist Job 25 miles from Massapequa
Hybrid role in New York City. Our client, a major US-based Sports & Entertainment organization, is looking to engage an experienced Data Engineer to join a group responsible for developing and extending their next-generation data platform. This platform will enable fans to access personalized content and related offerings through advanced analytics. The platform is being built leveraging state-of-the-art cloud components and capabilities.
In this role, you will be responsible for building scalable solutions for data ingestion, processing, and analytics that cross-cut data engineering, architecture, and software development. You will be involved in designing and implementing cloud-native solutions for ingestion, processing, and compute at scale.
Due to client requirement, applicants must be willing and able to work on a w2 basis. For our w2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401k with company matching, and life insurance.
Rate: $70 - $80 / hr. w2
Responsibilities
Design, implement, document, and automate scalable, production-grade, end-to-end data pipelines, including API and ingestion, transformation, processing, monitoring, and analytics capabilities, while adhering to best practices in software development (a minimal ingestion sketch follows this list).
Build data-intensive application solutions on top of cloud platforms such as AWS, leveraging state-of-the-art solutions including distributed compute, lake house, real-time streaming while enforcing best practices for data modeling and engineering.
Work with infrastructure engineering team to setup infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
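A minimal sketch of such an ingestion pipeline appears below, assuming a hypothetical Kafka topic and S3 paths; the real platform's schemas, sinks, and table formats (e.g., Hudi) would differ.

```python
# Sketch: PySpark structured streaming from Kafka into cloud object storage.
# Requires the spark-sql-kafka-0-10 package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("fan-events-ingest").getOrCreate()

schema = StructType([
    StructField("fan_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "fan-events")                 # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(events.writeStream
 .format("parquet")                                          # a lakehouse table format would slot in here
 .option("path", "s3a://fan-platform/bronze/events/")        # hypothetical paths
 .option("checkpointLocation", "s3a://fan-platform/checkpoints/events/")
 .outputMode("append")
 .start()
 .awaitTermination())
```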
Experience Requirements
Bachelor's degree in computer science or a related field required.
Minimum of 2 years of related experience with a track record of building production software.
Hands-On experience building and delivering cloud native data solutions (AWS preferred)
Solid computer science fundamentals with experience across a range of disciplines, with one or more areas of deep knowledge and experience in an advanced programming language.
Working experience of distributed processing systems including Apache Spark.
Hands-On experience with lake house architecture, open table formats such as Hudi, orchestration frameworks such as Airflow, real time streaming with Apache Kafka and container technology.
Deep understanding of best software practices and application of them in data engineering.
Familiarity with data science and machine learning workflows and frameworks
Ability to work independently and collaborate with cross-functional teams to complete projects.
Experience leading integration of technical components with other teams.
SAP Data Migration Lead
Senior Data Scientist Job 25 miles from Massapequa
Good knowledge of SAP is needed.
Knowledge of ETL tools such as Informatica is required.
Minimum of 6-7 data migration projects' experience on SAP ERP.
Hands-on experience driving mapping sessions for converting source to target.
Should be good at building queries for data extraction.
Knowledge of data cleansing/cleansing burndown is a must.
Must understand the fundamentals of data migration to build a data plan for the project.
LSMW and Winshuttle experience for loading data.
Data Engineer
Senior Data Scientist Job 33 miles from Massapequa
We're seeking a skilled Data Engineer for a 12-month contract role, ideal for someone with a solid foundation in data analytics and engineering. This role involves working closely with both business and IT teams, utilizing data to generate valuable insights that support critical decision-making. If you have a few years of experience with cloud data warehouses, SQL, and Python, and enjoy collaborative work, we'd love to meet you!
Responsibilities:
Design, develop, and maintain data models and pipelines within a cloud data warehouse (preferably Snowflake).
Conduct data analysis to extract insights and support business decisions.
Collaborate with business stakeholders and IT to understand requirements and fulfill data needs.
Write efficient SQL queries to manage and analyze large datasets.
Develop Python scripts for data processing and automation.
Requirements:
Experience: 2-3 years in a data analyst or data engineering role.
Technical Skills: Proficiency in SQL and Python.
Cloud Data Warehouse Experience: Experience with a cloud data warehouse is required, ideally Snowflake.
Interpersonal Skills: Strong communication skills with the ability to interact effectively with both business and IT teams.
Our Vetting Process
At Emergent Staffing, we work hard to find the individual who is the right fit for our clients. Here are the steps of our vetting process for this position:
Application (5 minutes)
Online Assessment & Short Algorithm Challenge (40-60 minutes)
Initial Phone Interview (30-45 minutes)
3 Interviews with the Client
Job Offer!
Job Type: Full-time
Benefits:
401(k)
401(k) matching
Dental insurance
Flexible spending account
Health insurance
Health savings account
Life insurance
Retirement plan
Vision insurance
Schedule:
8 hour shift
Monday to Friday
Data Engineer on W2
Senior Data Scientist Job 25 miles from Massapequa
****Please note that only GCs, GC-EADs and US citizens are eligible to apply****
Title - Data Engineer
Duration - 6 month contract
Job Specification - We are seeking a highly skilled Data Engineer with previous hands-on experience in the Financial or Banking industry, capable of hitting the ground running and quickly building scalable data solutions. The ideal candidate will have a strong background in Azure Cloud and will work within a cross-functional Agile team of 15 people. This role requires someone who can collaborate closely with stakeholders to understand business requirements, particularly for capturing revenue data for brokers. The engineer will also work with compliance and security teams to ensure all systems meet regulatory standards. They will be responsible for building a flexible data capture system while maintaining BAU (Business as Usual) functionality. This is a dynamic, fast-paced environment where the ability to deliver high-quality solutions quickly is critical.
Job Details
Here are the job requirements for a Data Engineer position with expertise in Python ETL and Azure. We are seeking a highly skilled and motivated individual who can contribute to our team and help drive our data engineering initiatives.
Job Requirements
1. Python ETL: The ideal candidate should have strong proficiency in the Python programming language and experience with Extract, Transform, Load (ETL) processes. They should be able to design and develop efficient ETL workflows to extract data from various sources, transform it per business requirements, and load it into target systems (a minimal sketch follows this list).
2. Azure Experience: The candidate should have hands-on experience working with Microsoft Azure cloud platform. They should be familiar with Azure services and tools, and have a good understanding of Azure architecture and best practices.
3. Azure Data Factory, DataBricks, Azure Storage, and Azure VM: The candidate should have practical experience with Azure Data Factory, DataBricks, Azure Storage, and Azure Virtual Machines. They should be able to design and implement data pipelines using Azure Data Factory, perform data transformations and analytics using DataBricks, and manage data storage and virtual machines in Azure.
4. Data Governance, Data Quality, and Controls: The candidate should have a strong understanding of data governance principles, data quality management, and data controls. They should be able to implement data governance frameworks, establish data quality standards, and ensure compliance with data regulations and policies.
5. Implementing alerts and notifications for batch jobs: The candidate should have experience in setting up alerts and notifications for batch jobs. They should be able to configure monitoring and alerting mechanisms to ensure timely identification and resolution of issues in batch job execution.
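For illustration, a minimal Python ETL sketch in the spirit of requirement 1 follows; the endpoint, column names, and target connection string are hypothetical, not the client's actual systems.

```python
# Sketch: a small extract-transform-load job for broker revenue data.
import pandas as pd
import requests
from sqlalchemy import create_engine

# Extract: pull raw records from an upstream REST source (hypothetical endpoint).
resp = requests.get("https://api.example.com/broker-trades", timeout=30)
resp.raise_for_status()
raw = pd.DataFrame(resp.json())

# Transform: type the columns and keep only valid revenue rows.
raw["trade_date"] = pd.to_datetime(raw["trade_date"], errors="coerce")
raw["revenue"] = pd.to_numeric(raw["revenue"], errors="coerce")
clean = raw.dropna(subset=["trade_date", "revenue", "broker_id"])

# Load: append into the target table (Azure SQL shown via SQLAlchemy; DSN is hypothetical).
engine = create_engine("mssql+pyodbc://user:pass@azure-sql-dsn")
clean.to_sql("broker_revenue", engine, if_exists="append", index=False)
```

In the Azure setup described above, the same extract/transform/load steps would typically be orchestrated by Azure Data Factory or run as a Databricks job, with alerts configured on failure per requirement 5.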
Senior Data Engineer (front office) - global commodities trading firm - Up to $170k base
Senior Data Scientist Job 27 miles from Massapequa
Interested in joining a tech-centric global commodities trading firm?
You'd be joining a very small and tight-knit data engineering team of 5, working side-by-side with world-class investment professionals and data scientists in the front office. From data architecture design to data ingestion pipeline optimization and management, your main focus will be on anything and everything data-related within the business.
Recognized globally as one of the top companies in the commodities trading space, this firm is extremely tech-driven and has placed technology at the heart of its business. In recent years, they've been putting an increased amount of emphasis on data science, and they've drastically ramped up their presence in the generative AI space as well.
To give you a clearer picture of the projects you could be doing, the team has just finished a project that tracked cargo vessels' raw shipping-location data (over 300 million records) in specified regions, enabling the data scientists to build a supply-and-demand machine learning model using time series data.
Upon joining, you'd jump straight into a massive data migration project, loading on-prem Oracle into Snowflake data warehouses. You'd be designing data ingestion pipelines from scratch, and using lots of Python and SQL to do so. You'd get to work extensively with ETL frameworks, writing pipelines to load millions of records, and building dashboards with business intelligence tools like Power BI and Tableau.
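To make that concrete, here is a minimal, hypothetical sketch of batch-copying one table from Oracle into Snowflake with Python; the connection details, table names, and chunked `write_pandas` loading path are assumptions, not the firm's actual pipeline.

```python
# Sketch: batch-copy an Oracle table into Snowflake in memory-friendly chunks.
import oracledb                                   # pip install oracledb
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

ora = oracledb.connect(user="reader", password="...", dsn="onprem-db:1521/PROD")
sf = snowflake.connector.connect(
    account="firm_account", user="loader", password="...",   # hypothetical credentials
    warehouse="LOAD_WH", database="MARKETDATA", schema="RAW",
)

# Stream the source table in chunks so millions of rows never sit in memory at once.
for chunk in pd.read_sql("SELECT * FROM VESSEL_POSITIONS", ora, chunksize=100_000):
    chunk.columns = [c.upper() for c in chunk.columns]        # Snowflake-friendly names
    write_pandas(sf, chunk, table_name="VESSEL_POSITIONS", auto_create_table=True)

ora.close()
sf.close()
```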
Additionally, you'll have a substantial advantage over other applicants if you have any experience with AI or ML technologies.
This is a hybrid position where you'll be required to work 3x per week at their headquarters in Stamford, Connecticut.
Looking for your next fast-paced and fundamentally data focused role? Apply today.
Senior Data Engineer - Snowflake/SQL/Python
Senior Data Scientist Job 27 miles from Massapequa
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will possess a strong background in data architecture, management, and integration, particularly within the energy commodities or financial services sectors. This role will involve designing robust data solutions, ensuring data quality, and implementing ETL processes to support our business objectives.
Key Responsibilities:
Oversee the handling of large and complex data sets, ensuring data quality and reliability.
Bring in raw data from multiple vendors, managing a significant volume of data ingestions daily.
Design data structures that support the Data Science team in building machine learning models, with a focus on time series data.
Work primarily with Snowflake as the main database and assist in migrating data from Oracle to Snowflake.
Use SnapLogic to create and maintain efficient data extraction, transformation, and loading processes.
Write and optimize data pipelines using SQL and Python.
Partner with the Data Science team to support their data needs and analytics.
Identify opportunities to enhance data handling and workflow efficiency.
Relevant Skills and Qualifications:
Bachelor's degree in STEM; advanced degree preferred.
Proven experience in Data Engineering, particularly in managing large-scale data sets in a trading or financial services environment.
Proficiency in SQL and Python for data pipeline development.
Experience with Snowflake and Oracle databases; knowledge of data migration strategies is a plus.
Familiarity with ETL tools, preferably SnapLogic or similar platforms.
Strong analytical skills with the ability to work with complex data structures and deliver actionable insights.
Knowledge of data modeling, data architecture, and best practices in data management.
Excellent communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
Senior Data Architect (AWS, Snowflake, Python)
Senior Data Scientist Job 25 miles from Massapequa
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and data, technology, and AI expertise.
Developing innovative, scalable, and reliable applications with high availability and cross-region resiliency
Hands-on experience with AWS, Snowflake, Python, DevOps, Terraform
The client is building a data platform and needs someone who understands DevOps.
Hands-on experience building cloud-native applications using AWS services and Snowflake
Strong Python development skills implementing best coding practices (experience with Pandas, threading, subprocess, SQLAlchemy, Snowflake connector, boto3, and Snowpark libraries)
Experience with CI/CD, containerization, and code versioning
Familiarity with various machine learning techniques and frameworks (e.g. TensorFlow, scikit-learn, Pytorch) is good to have
Experience with Python-related UI libraries such as Streamlit or Flask is a plus
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Please get to know us at *************** and on X, Facebook, LinkedIn, and YouTube.
Also, please keep in mind that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.