DevOps Engineer (Bilingual in Mandarin)
DevOps engineer job in New York, NY
We are seeking a detail-oriented and experienced DevOps Engineer to lead the administration of our AWS cloud infrastructure, CI/CD pipelines, and database environments. This role requires deep expertise in AWS (including multi-account structures, SSO, and Organizations), hands-on experience with MongoDB cluster and MySQL/Aurora administration, and strong proficiency in CI/CD using tools like TeamCity and Git. You will be responsible for automating deployments, ensuring system reliability and performance, and supporting a complex ecosystem of services and databases. The ideal candidate has a strong grasp of modern DevOps practices, including infrastructure as code, proactive monitoring, and security automation, and collaborates effectively with global teams to deliver secure, scalable, and high-performing infrastructure across all environments.
*Key Responsibilities:*
AWS Infrastructure & Identity Management:
* Working experience with AWS Organizations management, including AWS Single Sign-On, roles, and permissions
* Apply best practices in identity, account, and permission management
* Optimize AWS resource usage and implement cost-saving measures through tagging, lifecycle policies, and instance type adjustments.
*Advanced AWS Networking & Security:*
* Deep understanding of and hands-on operational experience with common network components, including but not limited to AWS CloudFront, API Gateway, Elastic Load Balancing, and firewalls.
* Working experience with VPC configuration and a deep understanding of VPC-related security
* Ability to troubleshoot network-related issues.
*Infrastructure as Code*
* Working experience managing large infrastructure with Terraform in AWS environments
*MongoDB/MySQL/Aurora Database Management:*
* Manage and optimize database clusters.
* Perform upgrades, backups, replication setup, performance tuning, and TLS configuration.
* Coordinate cross-environment database migrations and health monitoring using MongoDB Ops Manager and AWS tools.
* Manage database access control and permissions.
* Optimize database queries.
*CI/CD & Automation:*
* Design, build, and maintain pipelines using Bitbucket Pipelines and TeamCity.
* Automate build/test/deploy processes with rollback capabilities and health checks.
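The "rollback capabilities and health checks" bullet describes a common deployment pattern; the following is a minimal, hypothetical sketch of that pattern in plain Python (the function names `release`, `check_health`, and `roll_back` are illustrative stand-ins, not tools named in this posting):

```python
# Hedged sketch of deploy -> health-check -> rollback automation.
# All names below are hypothetical; a real pipeline would wire these
# callables to TeamCity/Bitbucket Pipelines steps and real probes.
import time

def release(deploy, check_health, roll_back, retries=3, delay=0.1):
    """Deploy, poll the health check, and roll back on repeated failure."""
    deploy()
    for _ in range(retries):
        if check_health():
            return "deployed"
        time.sleep(delay)  # wait before re-checking health
    roll_back()
    return "rolled_back"

# Example: a release whose health check never passes triggers rollback.
state = {"version": "v1"}

def deploy():
    state.update(version="v2", healthy=False)  # simulate a bad deploy

def healthy():
    return state.get("healthy", False)

def roll_back():
    state.update(version="v1")  # restore the previous version

result = release(deploy, healthy, roll_back, delay=0)
```

In a real pipeline the health check would hit a service endpoint and the rollback would redeploy the last known-good artifact; the control flow stays the same.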
*Monitoring & Observability:*
* Set up comprehensive system and application monitoring using CloudWatch and Uptime Kuma.
* Implement log aggregation and alerting for AWS services, MongoDB, and deployed applications.
*Security & Compliance:*
* Implement and enforce TLS/SSL configurations to meet PCI-DSS and internal compliance standards.
* Conduct vulnerability scans and work with cybersecurity teams to close findings.
* Maintain IAM roles, access policies, and audit trails for security reviews.
*Collaboration & Support:*
* Work closely with development, QA, and global infrastructure teams.
* Provide documentation and onboarding for systems, pipelines, and recovery procedures.
* Participate in on-call rotations and lead incident response efforts.
*Hybrid Schedule:* onsite 3 days per week from Tuesday to Thursday.
*Required Qualifications:*
* 5+ years in DevOps, Cloud Engineering, or SRE roles.
* Deep expertise with AWS, including SSO, Organizations, EC2, IAM, S3, and multi-account management.
* Strong hands-on experience with CloudFront, API Gateway, ALB, NLB, and WAF.
* Proven MongoDB cluster management experience (EC2-based and Atlas).
* Proven SQL database administration experience, including MySQL and PostgreSQL
* Proficient in CI/CD workflows with TeamCity and Bitbucket Pipelines.
* Skilled in Linux, Docker, and scripting languages (Bash, Python, Node.js).
* Monitoring experience with CloudWatch, Datadog, and Uptime Kuma.
* Infrastructure-as-Code knowledge using Terraform or CloudFormation.
* Experience managing TLS certificates, DNS, and secure network routing.
* Strong documentation and collaboration skills across distributed teams.
* Ability to communicate in Mandarin Chinese.
Job Type: Full-time
Pay: $125,000.00 - $165,000.00 per year
Benefits:
* 401(k)
* Dental insurance
* Health insurance
* Paid time off
* Vision insurance
Application Question(s):
* Will you now or in the future require sponsorship (H-1B, etc.) to work in the US?
Experience:
* AWS: 3 years (Preferred)
* Cloud infrastructure: 3 years (Preferred)
* CI/CD: 3 years (Preferred)
Language:
* Mandarin (Required)
Ability to Commute:
* New York, NY 10016 (Required)
Ability to Relocate:
* New York, NY 10016: Relocate before starting work (Required)
Work Location: Hybrid remote in New York, NY 10016
Senior ML Ops Engineer
DevOps engineer job in New York, NY
About Alaffia & Our Mission
Each year, the U.S. healthcare system suffers from over $500B in wasted spending due to medical billing fraud, waste, and administrative burden. At Alaffia, we're committed to changing that paradigm. We've assembled a team of clinicians, AI engineers, and product experts to build advanced AI solutions that will directly bend the cost curve for all patients across the healthcare ecosystem. Collectively, we're building best-in-class AI software to provide our customers with co-pilot tools, AI agents, and other cutting-edge solutions to reduce administrative burden and reduce healthcare costs.
We're a high-growth, venture-backed startup based in NYC and are actively scaling our company.
About the Role & What You'll Be Doing
Alaffia is a healthcare AI startup revolutionizing health and data automation. Our AI-driven platform leverages state-of-the-art generative AI and machine learning technologies to enhance accuracy, efficiency, and compliance in medical billing and auditing. As we scale, we are seeking a Senior ML Ops Engineer to build cutting-edge AI solutions, drive innovation, and shape the future of healthcare automation.
At Alaffia, AI is at the core of our mission. We are seeking an experienced engineer who is passionate about deploying scalable, safe, and regulatory-compliant AI-driven systems. Our AI technology powers intelligent automation for medical billing, ensuring accuracy and operational efficiency. We seek someone who thrives on building large-scale AI systems that enhance workflow efficiency, while also prioritizing all the necessary safety guardrails for responsible AI. You will have the opportunity to orchestrate various AI agents with an optimized system design that integrates AI platforms, data storage, and human-in-the-loop feedback. In this role, you'll be shaping the future of AI-driven healthcare automation while tackling some of the most significant challenges in AI deployment and monitoring.
Your Responsibilities
Deploy NLP, OCR, and multi-modal AI products on secure cloud environments.
Design AI systems, focusing on pipeline architecture and tooling to meet scalability, observability, performance, latency, and fault-tolerance requirements
Design data schemas and develop ETL processes to integrate data and human annotation with AI model tuning and benchmarking pipelines.
Create best practices for data and AI experiment management
Write highly robust, scalable code that is flexible, reusable, and adaptable to evolving requirements.
Ensure high code quality through rigorous code review processes and foster a collaborative engineering culture.
Build and leverage AI tools to improve developer efficiency and alignment across teams.
Proactively identify, resolve, and mitigate technical risks before deployments and releases.
What We're Looking For:
8+ years of technical experience, with at least 4 years in a dedicated software engineering role
Strong background in data modeling, versioning, and storage for AI data and annotation
Recent experience developing scalable enterprise AI products
Proficient in multiple AI frameworks, for example MLflow, LangChain, Langfuse, CrewAI, and Weights & Biases
Firm understanding of AI software development and quality assurance procedures
Working knowledge and design skills across a wide array of databases
Experience with AI experiment tracking, monitoring, and comparison
Demonstrated ability to stay up to date with the latest AI methodologies and systems.
Exceptional problem-solving skills and the ability to work in a fast-paced, evolving environment.
Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
Our Culture
At Alaffia, we fundamentally believe that the whole is more valuable than the sum of its individual parts. Further to that point, we believe a diverse team of individuals with various backgrounds, ideologies, and types of training generates the most value. Our people are entrepreneurial by nature, problem solvers, and are passionate about what they do - both inside and outside of the office.
What Else Do You Get Working With Us?
Competitive compensation package (cash + equity)
Medical, Dental and Vision benefits
Flexible, paid vacation policy
Work in a flat organizational structure - direct access to Leadership
AI Engineer
DevOps engineer job in New York, NY
AI Engineer - Healthcare Automation Platform
Well-Funded Startup | Healthcare AI | Hybrid (NYC preferred) or Remote
About Us
We're building an AI-powered automation platform that streamlines critical workflows in healthcare operations. Our system processes complex, unstructured data to ensure time-sensitive information gets where it needs to go, reducing delays and improving operational efficiency for healthcare providers.
We're in production with paying enterprise customers and experiencing rapid growth.
The Role
We're looking for a high-agency AI Engineer to bridge the gap between cutting-edge ML research and real-world product delivery. You'll design and build agentic workflows that automate complex operational processes, combining LLMs, vision models, and structured automation to solve challenging infrastructure and workflow problems.
This role involves creating pipelines, evaluation harnesses, and scalable production-grade agents. You'll research and implement the best-fit technology for each workflow, working across the full stack from data collection to orchestration to frontend integration.
What You'll Build
Ship full-stack AI systems end to end, from prototype to production
Build observability and debugging tools to capture model performance, user feedback, and edge cases
Go from ideation to working code within hours; iterate rapidly on experiments and data
Design agentic workflows powered by LLMs and vision models for document understanding
Create evaluation frameworks to test AI system performance beyond raw model accuracy
Work directly with cross-functional teams (ML, Sales, Customer Success) to build AI solutions for diverse use cases
What We're Looking For
Full-stack engineering experience with web frameworks, backend systems, and cloud infrastructure
Proven track record of building, testing, deploying, scaling, and monitoring LLM-centered software architectures
Hands-on expertise with LLM APIs and production AI system deployment
Understanding of how to evaluate AI systems holistically, beyond model accuracy alone
Strong communication skills: the ability to write clear technical documentation and explain complex systems
Bonus: Experience in healthcare or working with unstructured documents
Why Join Us?
Drive Impact: High-agency culture where you set the pace and see direct results
Own Your Work: End-to-end ownership from research to production deployment
Innovate with Purpose: Join a high-caliber team solving real problems at scale
Competitive Package: $200K-$240K + equity + comprehensive benefits
Great Perks: Unlimited PTO, 100% paid health benefits, 401(k) match, catered lunch, snacks
Location: NYC office 4 days/week preferred (Chelsea), remote considered for exceptional candidates
Oscar Associates Limited (US) is acting as an Employment Agency in relation to this vacancy.
Staff Software Engineer
DevOps engineer job in New York, NY
Who We Are
At City Storage Systems (CSS), we are dedicated to building Infrastructure for Better Food. Our mission is to empower restaurateurs worldwide to thrive in the online food delivery market. By making food more affordable, of higher quality, and convenient, we're transforming the industry for everyone, from budding entrepreneurs opening their first restaurant to global quick-service chains.
What You'll Do
As a backend-focused Software Engineer at CSS, you'll play a crucial role in our data-driven development team, helping to advance our state-of-the-art menu platform. Your responsibilities will include:
Data-Driven Development: Contribute to our data-centric development efforts.
Project Planning: Participate in strategic planning for various internal tools.
Agile Methodologies: Implement and test software using agile methodologies.
Collaborative Teamwork: Work closely with a team to enhance and support our technology.
Code Contribution: Write, debug, maintain, and test code across multiple projects.
Architectural Design: Design scalable systems with a focus on robust architecture.
Continuous Improvement: Engage in continuous improvement initiatives.
Innovation: Drive innovation within the team and support technological advancements at CSS.
What the Team Focuses On
Our menu platform (check our tech blog) offers comprehensive menu management features designed to streamline restaurant operations, enhance customer experiences, and optimize performance. It serves as a single source of truth for menus, seamlessly integrating with online channels such as DoorDash, UberEats, and Grubhub, as well as offline point-of-sale (POS) systems like Square, Toast, and NCR.
Key capabilities include updating menus with new items, pricing, and taxes, performing A/B testing on different structures, setting availability by channel, creating combos and promotions, managing ingredients and SKUs, and configuring operational hours. Additionally, our platform features automated linking to ensure POS and online menus are always synchronized, minimizing discrepancies.
Boasting a 99.9% availability rate, our platform supports a vast network of brands in the US and worldwide, ensuring uninterrupted service. Over 100,000 restaurateurs use our platform daily to streamline their operations and consistently express high satisfaction.
What We're Looking For
Education: Bachelor's Degree in Computer Science or equivalent.
Experience: 7-10 years of experience in a relevant role.
Individual Contribution: Proven track record of significant contributions in previous roles, demonstrating your impact.
Architectural Skills: Ability to design and create robust architecture from scratch and evolve existing systems.
Communication Skills: Strong communication and presentation skills, with the ability to collaborate with non-engineering stakeholders.
Technical Expertise: Experience designing and implementing scalable, reliable, and efficient distributed systems. Familiarity with Java / Go / Kotlin is required.
Concurrency: Experience building systems that execute multiple tasks concurrently while managing contention for shared time and space resources.
Application Maintenance: Experience in maintaining and extending large-scale, high-traffic applications.
Why Join Us
Growing Market: You'll be part of an $80 billion market projected to reach at least $500 billion by 2030 in the US alone.
Industry Impact: Join a team that is transforming the restaurant industry and helping restaurants succeed in online food delivery.
Collaborative Environment: Benefit from the support and guidance of experienced colleagues and managers, who will help you learn, grow, and achieve your goals. Work closely with other teams to ensure our customers' success.
Additional Information
This role is based in our Mountain View office. We look forward to sharing more about a meaningful career at CSS!
GTM Engineer
DevOps engineer job in New York, NY
About us:
Camber builds software to improve the quality and accessibility of healthcare. We streamline and replace manual work so clinicians can focus on what they do best: providing great care. For more details on our thesis, check out our write-up: What is Camber?
We've raised $50M in funding from phenomenal supporters at a16z, Craft Ventures, YCombinator, Manresa, and many others who are committed to improving the accessibility of care. For more information, take a look at: Announcing Camber
About our Culture:
Our mission to change behavioral health starts with us and how we operate. We don't want to just change behavioral health, we want to change the way startups operate. Here are a few tactical examples:
1) Improving accessibility and quality of healthcare is something we live and breathe. Everyone on Camber's team cares deeply about helping clinicians and patients.
2) We have to have a sense of humor. Healthcare is so broken, it's depressing if you don't laugh with us.
About the role:
We're seeking a proactive, tech-savvy sales operations professional with a startup mindset: someone who thrives on breaking growth barriers and enabling sales excellence. This person will be both a systems admin and a strategic partner, ensuring HubSpot and our tech stack are humming while also helping shape compensation, territories, and GTM expansion.
What you'll do:
Systems & CRM Administration
Manage and optimize current CRM (HubSpot) and other tech stack integrations: build workflows, dashboards, and troubleshoot system issues
Support onboarding/offboarding of users, governance, data hygiene, and adoption
Data, Forecasting & Reporting
Design and maintain dashboards, reports, and metrics that drive decision-making (e.g., pipeline health, forecast accuracy, win rates)
Deliver actionable insights to stakeholders across sales leadership
Compensation & Territory Strategy
Assist in designing incentive and quota plans that align with sales goals
Collaborate on territory definition, alignment, and carve strategy to ensure balanced coverage
Process & Cross-Functional Enablement
Streamline sales workflows and marketing-sales handoffs
Partner across teams (Sales, Marketing, Finance) to ensure operational alignment and seamless execution
Strategic & Tactical Execution
Be hands-on when needed (data crunching, HubSpot tweaks) while contributing to broader sales strategy planning
What we're looking for:
2-4 years in a startup, sales operations, or RevOps environment (or similar roles)
CRM administration experience-ideally HubSpot; bonus if familiar with other tools and workflows
Strong analytical skills: coding, Excel, BI, sales forecasting, data modeling
Operational rigor and problem-solving mindset
A strategic thinker who can scale systems and structure
Thrives in growth-stage constraints; comfortable wearing multiple hats and moving quickly
Perks & Benefits at Camber:
Comprehensive Health Coverage: Medical, dental, and vision plans with nationwide coverage, including 24/7 virtual urgent care.
Mental Health Support: Weekly therapy reimbursement up to $100, so you can prioritize the care that works best for you.
Paid Parental Leave: Up to 12 weeks of fully paid time off for new parents (birth, adoption, or foster care).
Financial Wellness: 401K (traditional & Roth), HSA & FSA options, and monthly commuter benefits for NYC employees.
Time Off That Counts: 18 PTO days per year (plus rollover), plus office closures for holidays, monthly team events, company off-sites, and daily in-office lunches for our team.
Fitness Stipend: $100/month to use on fitness however you choose.
Hybrid Flexibility: In NYC? We gather in the office 3-5x/week, with flexibility when life happens. Fridays are remote-friendly.
Camber is based in New York City, and we prioritize in-person and hybrid candidates.
Building an inclusive culture is one of our core tenets as a company. We're very aware of structural inequalities that exist, and recognize that underrepresented minorities are less likely to apply for a role if they don't think they meet all of the requirements. If that's you and you're reading this, we'd like to encourage you to apply regardless - we'd love to get to know you and see if there's a place for you here!
In addition, we take security seriously, and all of our employees contribute to uphold security requirements and maintain compliance with HIPAA security regulations.
Data Engineer
DevOps engineer job in New York, NY
Mercor is hiring a Data Engineer on behalf of a leading AI lab. In this role, you'll **design resilient ETL/ELT pipelines and data contracts** to ensure datasets are analytics- and ML-ready. You'll validate, enrich, and serve data with strong schema and versioning discipline, building the backbone that powers AI research and production systems. This position is ideal for candidates who love working with data pipelines, distributed processing, and ensuring data quality at scale.
* * *

### **You're a great fit if you:**

- Have a background in **computer science, data engineering, or information systems**.
- Are proficient in **Python, pandas, and SQL**.
- Have hands-on experience with **databases** like PostgreSQL or SQLite.
- Understand distributed data processing with **Spark or DuckDB**.
- Are experienced in orchestrating workflows with **Airflow** or similar tools.
- Work comfortably with common formats like **JSON, CSV, and Parquet**.
- Care about **schema design, data contracts, and version control** with Git.
- Are passionate about building pipelines that enable **reliable analytics and ML workflows**.

* * *

### **Primary Goal of This Role**

To design, validate, and maintain scalable ETL/ELT pipelines and data contracts that produce clean, reliable, and reproducible datasets for analytics and machine learning systems.

* * *

### **What You'll Do**

- Build and maintain **ETL/ELT pipelines** with a focus on scalability and resilience.
- Validate and enrich datasets to ensure they're **analytics- and ML-ready**.
- Manage **schemas, versioning, and data contracts** to maintain consistency.
- Work with **PostgreSQL/SQLite, Spark/DuckDB, and Airflow** to manage workflows.
- Optimize pipelines for performance and reliability using **Python and pandas**.
- Collaborate with researchers and engineers to ensure data pipelines align with product and research needs.

* * *

### **Why This Role Is Exciting**

- You'll create the **data backbone** that powers cutting-edge AI research and applications.
- You'll work with modern **data infrastructure and orchestration tools**.
- You'll ensure **reproducibility and reliability** in high-stakes data workflows.
- You'll operate at the **intersection of data engineering, AI, and scalable systems**.

* * *

### **Pay & Work Structure**

- You'll be classified as an hourly contractor to Mercor.
- Paid weekly via Stripe Connect, based on hours logged.
- Part-time (20-30 hrs/week) with flexible hours; work from anywhere, on your schedule.
- Weekly bonus of **$500-$1000 USD** per 5 tasks.
- Remote and flexible working style.
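The "data contract" idea this role centers on can be illustrated with a short pandas sketch: check an incoming DataFrame against an expected schema before it enters the pipeline. The column names and dtypes below are illustrative assumptions, not from the posting.

```python
# Hedged sketch of schema/data-contract validation with pandas.
# CONTRACT and the column names are hypothetical examples.
import pandas as pd

CONTRACT = {"user_id": "int64", "event": "object", "amount": "float64"}

def validate(df: pd.DataFrame, contract: dict) -> list:
    """Return a list of contract violations (empty means the frame conforms)."""
    errors = []
    for col, dtype in contract.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return errors

good = pd.DataFrame({"user_id": [1, 2], "event": ["a", "b"], "amount": [1.0, 2.5]})
bad = good.drop(columns=["amount"])  # violates the contract
```

A production pipeline would typically run a check like this at ingestion time (e.g. as an Airflow task) and fail fast, rather than letting malformed data reach downstream ML jobs.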
Java Software Engineer
DevOps engineer job in New York, NY
BeaconFire is based in Central NJ, specializing in Software Development, Web Development, and Business Intelligence; we are looking for candidates with a strong background in Software Engineering or Computer Science for a Java/Software Developer position.
Responsibilities:
● Develop software and web applications using Java 8/J2EE/Java EE (and higher), React.js, Angular 2+, SQL, Spring, HTML5, CSS, JavaScript, and TypeScript, among other tools;
● Write scalable, secure, maintainable code that powers our clients' platforms;
● Create, deploy, and maintain automated system tests;
● Work with testers to understand opened defects and resolve them in a timely manner;
● Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review;
● Collaborate effectively with other team members to accomplish shared user story and sprint goals;
Basic Qualifications:
● Experience in a programming language such as JavaScript or similar (e.g. Java, Python, C, C++, C#, etc.) and an understanding of the software development life cycle;
● Basic programming skills using object-oriented programming (OOP) languages, with in-depth knowledge of common APIs and data structures like Collections, Maps, Lists, Sets, etc.;
● Knowledge of relational databases (e.g. SQL Server, Oracle) and basic SQL query language skills
Preferred Qualifications:
● Master's Degree in Computer Science (CS)
● 0-1 year of practical experience in Java coding
● Experience using Spring, Maven and Angular frameworks, HTML, CSS
● Knowledge of other contemporary Java technologies (e.g. WebLogic, RabbitMQ, Tomcat, etc.)
● Knowledge of JSP, J2EE, and JDBC
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.
Data Engineer
DevOps engineer job in New Providence, NJ
Job Title: Senior Data Engineer (Python & Snowflake, SQL)
Employment Type: Contract
The developer should have strong Python, Snowflake, SQL coding skills.
The developer should be able to articulate a few real-world experience scenarios and demonstrate solutions to real-life problems in Snowflake and Python.
The developer should be able to write Python code for intermediate-level problems given during the L1 assessment.
Leadership qualities: able to guide a team and own end-to-end support of the project.
Around 8 years' experience as a Snowflake developer designing and developing data solutions within the Snowflake Data Cloud, leveraging its cloud-based data warehousing capabilities. Responsible for designing and implementing data pipelines, data models, and ETL processes, ensuring efficient and effective data storage, processing, and analysis.
Able to write complex SQL queries and Python stored procedure code in Snowflake
Job Description Summary:
Data Modelling and Schema Design:
Create and maintain well-structured data models and schemas within Snowflake, ensuring data integrity and efficient query performance.
ETL/ELT Development:
Design and implement ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes to load data into Snowflake from various sources.
Data Pipeline Management:
Build and optimize data pipelines to ingest data into Snowflake, ensuring accurate and timely data flow.
SQL Optimization:
Write and optimize SQL queries to enhance performance and efficiency within Snowflake.
Performance Tuning:
Identify and address performance bottlenecks within Snowflake, optimizing query execution and resource allocation.
Security and Governance:
Implement data security and governance best practices within Snowflake environments, including access control and encryption.
Documentation and Maintenance:
Maintain documentation for data models, data pipelines, and other Snowflake solutions.
Troubleshooting and Support:
Troubleshoot and resolve issues within Snowflake, providing technical support to users.
Collaboration:
Collaborate with data architects, data engineers, and business users to understand requirements and deliver solutions
Other Skills:
Experience with data warehousing concepts and data modelling.
Hands-on experience in creating stored procedures, functions, tables, cursors.
Experience in database testing, data comparison, and data transformation scripting.
Capable of troubleshooting common database issues
Hands-on experience with GitLab, with an understanding of CI/CD pipelines and DevOps tools
Knowledge of AWS Lambda and Azure Functions
Data Engineer Manager
DevOps engineer job in New York, NY
Be part of a global consulting powerhouse, partnering with clients on their most critical strategic transformations.
We are Wavestone. Energetic, solution-driven experts who focus as much on people as on performance and growth. Hand in hand, we share a deep desire to make a positive impact. We are an ambitious firm with a worldwide reach and an ever-expanding portfolio of clients, topics, and projects. In North America, Wavestone operates from hubs in New York City, Pittsburgh, Dallas and Toronto. We work closely with CEOs and technology leaders to optimize IT strategy, sourcing models, and business processes and are committed to building lasting partnerships with our clients.
Are you a true team player, living strong values? Are you a passionate learner, aiming to grow every day? Are you a driven go-getter, tackling challenges head-on? Then we could be the right fit for you. Join Wavestone and thrive in an environment that's empowering, collaborative, and full of opportunities to turn today's challenges into tomorrow's solutions - contributing to one or more of our core 4 capabilities:
Business Consulting | Business Strategy & Transformation, Organizational Effectiveness & Change Management, Operating Model Design & Agility, Program Leadership & Project Management, Marketing, Innovation, & Customer Experience
Technology Consulting | IT Strategy & CTO Advisory, Technology Delivery, Data & Artificial Intelligence, Software & Application: Development & Integration, SAP Consulting, Insurance/Reinsurance
Cybersecurity | Cyber Transformation Remediation, Cyber Defense & Recovery, Digital Identity, Audit & Incident Response, Product & Industrial Cybersecurity
Sourcing & Service Optimization | Global Services Strategy, IT & Business Process Services Outsourcing, Global In-House Center Support, Services Optimization, Sourcing Program Management
Read more at *****************
Job Description
As a Data Engineer at the manager level at Wavestone, you will be expected to address both strategic and detailed client needs, serving as a trusted advisor to C-level executives while being comfortable supporting and leading hands-on data projects with technical teams.
In this role you will lead or support high-impact data transformation and modernization initiatives that accelerate and enable AI solutions, bridging business strategy and technical execution. You will architect and deliver robust, scalable data solutions, while mentoring teams and helping to shape the firm's data consulting offerings and skills. This role requires a unique blend of strategic vision, technical depth, and consulting leadership.
Key Responsibilities
Lead complex client engagements in data engineering, analytics, and digital transformation, from strategy through hands-on implementation.
Advise C-level and senior stakeholders on data strategy, architecture, governance, and technology adoption to drive measurable business value.
Architect and implement enterprise-scale data platforms, pipelines, and cloud-native solutions (Azure, AWS, Snowflake, Databricks, etc.).
Oversee and optimize ETL/ELT processes, data integration, and data quality frameworks for large, complex organizations.
Translate business objectives into actionable technical road maps, balancing innovation, scalability, and operational excellence.
Mentor and develop consultants and client teams, fostering a culture of technical excellence, continuous learning, and high performance.
Drive business development by shaping proposals, leading client pitches, and contributing to thought leadership and market offerings.
Stay at the forefront of emerging technologies and industry trends in data engineering, AI/ML, and cloud platforms.
Key Competencies & Skills
Strategic Data Leadership: Proven ability to set and execute data strategy, governance, and architecture at the enterprise level.
Advanced Data Engineering: Deep hands-on experience designing, building, and optimizing data pipelines and architectures (Python, SQL, Spark, Databricks, Snowflake, Azure, AWS, etc.).
Designing Data Models: Experience creating conceptual, logical, and physical data models that leverage different data modeling concepts and methodologies (normalization/denormalization, dimensional typing, data vault methodology, partitioning/embedding strategies, etc.) to meet solution requirements.
Cloud Data Platforms: Expertise in architecting and deploying solutions on leading cloud platforms (Azure, AWS, GCP, Snowflake).
Data Governance & Quality: Mastery of data management, MDM, data quality, and regulatory compliance (e.g., IFRS17, GDPR).
Analytics & AI Enablement: Experience enabling advanced analytics, BI, and AI/ML initiatives in complex environments.
Executive Stakeholder Management: Ability to communicate and influence at the C-suite and senior leadership level.
Project & Team Leadership: Demonstrated success managing project delivery, budgets, and cross-functional teams in a consulting context.
Continuous Learning & Innovation: Commitment to staying ahead of industry trends and fostering innovation within teams.
Qualifications
Bachelor's or master's degree in Computer Science, Engineering, Data Science, or related field, or equivalent business experience.
8+ years of experience in data engineering, data architecture, or analytics consulting, with at least 2 years in a leadership or management role.
Demonstrated success in client-facing roles, ideally within a consulting or professional services environment.
Advanced proficiency in Python, SQL, and modern data engineering tools (e.g., Spark, Databricks, Airflow).
Experience with cloud data platforms (Azure, AWS, GCP, Snowflake).
Relevant certifications (e.g., AWS Certified Data Analytics, Azure Data Engineer, Databricks, Snowflake) are a strong plus.
Exceptional problem-solving, analytical, and communication skills.
Industry exposure: Deep experience in Insurance, Pharma, or Financial Services
Additional Information
Salary Range: $157k-$200k annual salary
We are recruiting across several levels of seniority from Senior Consultant to Manager.
*Only candidates legally authorized to work for any employer in the U.S. on a full-time basis without the need for sponsorship will be considered. We are unable to sponsor or take over sponsorship of an employment visa at this time.
Our Commitment
Wavestone values and Positive Way
At Wavestone, we believe our employees are our greatest ambassadors. By embodying our shared values, vision, mission, and corporate brand, you'll become a powerful force for positive change. We are united by a shared commitment to making a positive impact, no matter where we are. This is better defined by our value base, "The Positive Way," which serves as the glue that binds us together:
Energetic - A positive attitude gives energy to lead projects to success. While we may not control the circumstances, we can always choose how we respond to them.
Responsible - We act with integrity and take ownership of our decisions and actions, considering their impact around us.
Together - We want to be a great team, not a team of greats. The team's strength is each individual member, each member's strength is the team.
We are Energetic, Responsible and Together!
Benefits
25 PTO / 6 Federal Holidays / 4 Floating Holidays
Great parental leave (birthing parent: 4 months | supporting parent: 2 months)
Medical / Dental / Vision coverage
401K Savings Plan with Company Match
HSA/FSA
Up to 4% bonus based on personal and company performance with room to grow as you progress in your career
Regular Compensation increases based on performance
Employee Stock Options Plan (ESPP)
Travel and Location
This full-time position is based in our New York office. You must reside or be willing to relocate within commutable distance to the office.
Travel requirements tend to fluctuate depending on your projects and client needs
Diversity and Inclusion
Wavestone seeks diversity among our team members and is an Equal Opportunity Employer.
At Wavestone, we celebrate diversity and inclusion. We have a strong global CSR agenda and an active Diversity & Inclusion committee with Gender Equality, LGBTQ+, Disability Inclusion and Anti-Racism networks.
If you need flexibility, assistance, or an adjustment to our recruitment process due to a disability or impairment, you may reach out to us to discuss this.
Feel free to visit our Wavestone website and LinkedIn page to see our most trending insights!
Lead Data Engineer (Marketing Technology)
Devops engineer job in New York, NY
About the job:
We're seeking a Lead Data Engineer to drive innovation and excellence across our Marketing Technology data ecosystem. You thrive in dynamic, fast-paced environments and are comfortable navigating both legacy systems and modern data architectures. You balance long-term strategic planning with short-term urgency, responding to challenges with clarity, speed, and purpose.
You take initiative, quickly familiarize yourself with source systems, ingestion pipelines, and operational processes, and integrate seamlessly into agile work rhythms. Above all, you bring a solution-oriented, win-win mindset, owning outcomes and driving progress.
What you will do at Sogeti:
Rapidly onboard into our Martech data ecosystem, understanding source systems, ingestion flows, and operational processes.
Build and maintain scalable data pipelines across Martech, Loyalty, and Engineering teams.
Balance long-term projects with short-term reactive tasks, including urgent bug fixes and business-critical issues.
Identify gaps in data infrastructure or workflows and proactively propose and implement solutions.
Collaborate with product managers, analysts, and data scientists to ensure data availability and quality.
Participate in agile ceremonies and contribute to backlog grooming, sprint planning, and team reviews.
What you will bring:
7+ years of experience in data engineering, with a strong foundation in ETL design, cloud platforms, and real-time data processing.
Deep expertise in Snowflake, Airflow, dbt, Fivetran, AWS S3, Lambda, Python, SQL.
Previous experience integrating data from multiple retail and ecommerce source systems.
Experience with implementation and data management for loyalty platforms, customer data platforms, marketing automation systems, and ESPs.
Deep expertise in data modeling with dbt.
Demonstrated ability to lead critical and complex platform migrations and new deployments.
Strong communication and stakeholder management skills.
Self-driven, adaptable, and proactive problem solver.
Education:
Bachelor's or Master's degree in Computer Science, Software Engineering, Information Systems, Business Administration, or a related field.
Life at Sogeti: Sogeti supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work options
401(k) with 150% match up to 6%
Employee Share Ownership Plan
Medical, Prescription, Dental & Vision Insurance
Life Insurance
100% Company-Paid Mobile Phone Plan
3 Weeks PTO + 7 Paid Holidays
Paid Parental Leave
Adoption, Surrogacy & Cryopreservation Assistance
Subsidized Back-up Child/Elder Care & Tutoring
Career Planning & Coaching
$5,250 Tuition Reimbursement & 20,000+ Online Courses
Employee Resource Groups
Counseling & Support for Physical, Financial, Emotional & Spiritual Well-being
Disaster Relief Programs
About Sogeti
Part of the Capgemini Group, Sogeti makes business value through technology for organizations that need to implement innovation at speed and want a local partner with global scale. With a hands-on culture and close proximity to its clients, Sogeti implements solutions that will help organizations work faster, better, and smarter. By combining its agility and speed of implementation through a DevOps approach, Sogeti delivers innovative solutions in quality engineering, cloud and application development, all driven by AI, data and automation.
Become Your Best | *************
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodation during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Please be aware that Capgemini may capture your image (video or screenshot) during the interview process and that image may be used for verification, including during the hiring and onboarding process.
Click the following link for more information on your rights as an Applicant **************************************************************************
Applicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $125,000 - $175,000.
This role may be eligible for other compensation including variable compensation, bonus, or commission. Full time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Synthetic Data Engineer (Observability & DevOps)
Devops engineer job in New York, NY
About the Role: We're building a large-scale synthetic data generation engine to produce realistic observability datasets - metrics, logs, and traces - to support AI/ML training and benchmarking. You will design, implement, and scale pipelines that simulate complex production environments and emit controllable, parameterized telemetry data.
🧠 What You'll Do
• Design and implement generators for metrics (CPU, latency, throughput) and logs (structured/unstructured).
• Build configurable pipelines to control data rate, shape, and anomaly injection.
• Develop reproducible workload simulations and system behaviors (microservices, failures, recoveries).
• Integrate synthetic data storage with Prometheus, ClickHouse, or Elasticsearch.
• Collaborate with ML researchers to evaluate realism and coverage of generated datasets.
• Optimize for scale and reproducibility using Docker containers.
✅ Who You Are
• Strong programming skills in Python.
• Familiarity with observability tools (Grafana, Prometheus, ELK, OpenTelemetry).
• Solid understanding of distributed systems metrics and log structures.
• Experience building data pipelines or synthetic data generators.
• (Bonus) Knowledge of anomaly detection, time-series analysis, or generative ML models.
💸 Pay
$50-75/hr depending on experience
Remote, flexible hours
Project timeline: 5-6 weeks
Plumbing Engineer
Devops engineer job in New York, NY
🔎 We're Hiring: Senior Plumbing & Fire Protection Engineer / MEP Designer (On-Site - Brooklyn, NY)
Precision Design, a leading MEP Engineering firm in Brooklyn, NY, is seeking a Senior Plumbing Engineer / MEP Designer with strong experience in Plumbing, Fire Protection
We are looking for a highly skilled professional who can independently design systems, coordinate across multiple disciplines, and manage multiple projects in a fast-paced environment.
Candidates must have at least 5 years of industry experience, including a minimum of 3 years designing in NYC, and must be fully knowledgeable of NYC Building and Energy Codes.
💼 Responsibilities
Design Plumbing & Fire Protection systems from concept through full construction documents
Prepare calculations for water, gas, sanitary/sewer, and storm loads
Perform field surveys and assess existing building conditions
Produce drawings, specifications, and all phases of design (schematic → construction administration)
Coordinate with architectural, engineering, and external project teams, including contractors and city agencies
Manage multiple projects simultaneously
Review shop drawings and participate in project meetings
📘 Required Skills & Experience
5+ years of related experience in Plumbing and/or Fire Protection design
At least 3 years of NYC-specific design experience
Strong knowledge of NYC Building Codes, NYC Energy Conservation Code, and NYC filing requirements
Experience with utility company filing procedures
Proficiency in AutoCAD (Revit is a plus)
Familiarity with NFPA-13, NFPA-13R, and hydraulic calculations
Experience with DEP cross-connection and site connection submissions is strongly preferred
Excellent communication, teamwork, and interpersonal skills
Ability to work independently and manage multiple deadlines
📍 Work Location
On-site in our Brooklyn, NY office (no remote option)
Data Engineer
Devops engineer job in New York, NY
About Beauty by Imagination:
Beauty by Imagination is a global haircare company dedicated to boosting self-confidence with imaginative solutions for every hair moment. We are a platform company of diverse, market-leading brands, including Wet Brush, Goody, Bio Ionic, and Ouidad - all of which are driven to be the most trusted choice for happy, healthy hair. Our talented team is passionate about delivering high-performing products for consumers and salon professionals alike.
Position Overview:
We are looking for a skilled Data Engineer to design, build, and maintain our enterprise Data Warehouse (DWH) and analytics ecosystem - with a growing focus on enabling AI-driven insights, automation, and enterprise-grade AI usage. In this role, you will architect scalable pipelines, improve data quality and reliability, and help lay the foundational data structures that power tools like Microsoft Copilot, Copilot for Power BI, and AI-assisted analytics across the business.
You'll collaborate with business stakeholders, analysts, and IT teams to modernize our data environment, integrate complex data sources, and support advanced analytics initiatives. Your work will directly influence decision-making, enterprise reporting, and next-generation AI capabilities built on top of our Data Warehouse.
Key Responsibilities
Design, develop, and maintain Data Warehouse architecture, including ETL/ELT pipelines, staging layers, and data marts.
Build and manage ETL workflows using SQL Server Integration Services (SSIS) and other data integration tools.
Integrate and transform data from multiple systems, including ERP platforms such as NetSuite.
Develop and optimize SQL scripts, stored procedures, and data transformations for performance and scalability.
Support and enhance Power BI dashboards and other BI/reporting systems.
Implement data quality checks, automation, and process monitoring.
Collaborate with business and analytics teams to translate requirements into scalable data solutions.
Contribute to data governance, standardization, and documentation practices.
Support emerging AI initiatives by ensuring model-ready data quality, accessibility, and semantic alignment with Copilot and other AI tools.
Required Qualifications
Proven experience with Data Warehouse design and development (ETL/ELT, star schema, SCD, staging, data marts).
Hands-on experience with SSIS (SQL Server Integration Services) for building and managing ETL workflows.
Strong SQL skills and experience with Microsoft SQL Server.
Proficiency in Power BI or other BI tools (Tableau, Looker, Qlik).
Understanding of data modeling, performance optimization, and relational database design.
Familiarity with Python, Airflow, or Azure Data Factory for data orchestration and automation.
Excellent analytical and communication skills.
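As one concrete example of the SCD concept in the qualifications above, a Type 2 slowly changing dimension preserves history by expiring the current row and appending a new one. A minimal pure-Python sketch follows; the column names are generic assumptions, not this team's warehouse schema:

```python
from datetime import date

def scd2_upsert(dimension, key, new_attrs, as_of=None):
    """Apply a Type 2 SCD change: expire the current row for `key` and
    append a new current row when any attribute differs."""
    as_of = as_of or date.today()
    current = next((r for r in dimension
                    if r["key"] == key and r["is_current"]), None)
    if current and all(current.get(k) == v for k, v in new_attrs.items()):
        return dimension  # no change detected, nothing to do
    if current:
        current["is_current"] = False  # close out the old version
        current["valid_to"] = as_of
    dimension.append({"key": key, **new_attrs,
                      "valid_from": as_of, "valid_to": None,
                      "is_current": True})
    return dimension
```

In practice the same logic is expressed as a MERGE statement in SQL Server or as an SSIS slowly-changing-dimension transformation; the row-versioning idea is identical.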
Preferred Qualifications
Experience with cloud data platforms (Azure, AWS, or GCP).
Understanding of data security, governance, and compliance (GDPR, SOC2).
Experience with API integrations and real-time data ingestion.
Background in finance, supply chain, or e-commerce analytics.
Experience with NetSuite ERP or other ERP systems (SAP, Oracle, Dynamics, etc.).
AI Focused Preferred Skills:
Experience implementing AI-driven analytics or automation inside Data Warehouses.
Hands-on experience using Microsoft Copilot, Copilot for Power BI, or Copilot Studio to accelerate SQL, DAX, data modeling, documentation, or insights.
Familiarity with building RAG (Retrieval-Augmented Generation) or AI-assisted query patterns using SQL Server, Synapse, or Azure SQL.
Understanding of how LLMs interact with enterprise data, including grounding, semantic models, and data security considerations (Purview, RBAC).
Experience using AI tools to optimize ETL/ELT workflows, generate SQL scripts, or streamline data mapping/design.
Exposure to AI-driven data quality monitoring, anomaly detection, or pipeline validation tools.
Experience with Microsoft Fabric, semantic models, or ML-integrated analytics environments.
Soft Skills
Strong analytical and problem-solving mindset.
Ability to communicate complex technical concepts to business stakeholders.
Detail-oriented, organized, and self-motivated.
Collaborative team player with a growth mindset.
Impact
You will play a key role in shaping the company's modern data infrastructure - building scalable pipelines, enabling advanced analytics, and empowering the organization to safely and effectively adopt AI-powered insights across all business functions.
Our Tech Stack
SQL Server, SSIS, Azure Synapse
Python, Airflow, Azure Data Factory
Power BI, NetSuite ERP, REST APIs
CI/CD (Azure DevOps, GitHub)
What We Offer
Location: New York, NY (Hybrid work model)
Employment Type: Full-time
Compensation: Competitive salary based on experience
Benefits: Health insurance, 401(k), paid time off
Opportunities for professional growth and participation in enterprise AI modernization initiatives
DevOps Release Engineer with Chef
Devops engineer job in Jersey City, NJ
DevOps Release Engineer responsibilities include deploying applications globally and coordinating releases in controlled environments using Chef; configuring a distributed SVN platform for multiple applications and automating builds using Jenkins or other tools; and defining a process for regulated SVN code control and builds.
Creating branches to support parallel development
Responsible for Code control and resolving merge conflicts
Develop all Unix/build scripts required for deployment automation
Engage and schedule functional resources in support of deployment, implementation and verification.
Acquire final approvals from QA and the LOB for application deployments
Responsible for SIT, UAT, Prod & COB environments
Coordinate overall deployments, create deployment documents, release plans and run books
Troubleshoot application and middleware issues
Mandatory Qualifications:
Requires 7-8 years of related experience in one or more of the following areas:
- Release engineer role
- Test environment support
- Knowledge of SVN/sub version or other SCM tools
- Unix/shell/Perl/Python scripting
- Middleware support (WebSphere preferred, WebLogic, etc.)
- Application build tools like ANT/Maven/Jenkins
- Relational databases like SQL Server/Oracle (preferred)
- Knowledge of file transfer mechanisms (Connect:Direct, SCP, SFTP, etc.)
Strong and proven ability to analyze and troubleshoot databases
Good knowledge of issue/problem tracking systems such as Jira
Advanced troubleshooting and deductive reasoning skills
Ability to converse in both technical and non-technical terms
Ability to show experience managing multiple tasks simultaneously and successfully
Experience working with geographically distributed and culturally diverse work-groups
Excellent written and verbal communication skills
Ability to develop strong client relationships
Additional Information
All your information will be kept confidential according to EEO guidelines.
Release Engineer - MetaMask
Devops engineer job in New York, NY
Consensys is the leading blockchain and web3 software company founded by Joe Lubin, CEO of Consensys and Co-Founder of Ethereum. Since 2014, Consensys has been at the forefront of innovation, pioneering technological developments within the web3 ecosystem.
Through our product suite, including the MetaMask platform, Infura, Linea, Diligence, and our NFT toolkit Phosphor, we have become the trusted collaborator for users, creators, and developers on their path to build and belong in the world they want to see.
Whether building a dapp, an NFT collection, a portfolio, or a better future, the instinct to build is universal. Consensys inspires and champions the builder instinct in everyone by making web3 universally easy to use and develop on.
Our mission is to unlock the collaborative power of communities by making the decentralized web universally easy to access, use, and build on.
You'll get to work on the tools, infrastructure, and apps that scale these platforms to onboard one billion participants and 5 million developers. You'll be constantly exposed to new concepts, ideas, and frameworks from your peers, and as you work on different projects - challenging you to stay at the top of your game. You'll join a network of builders that reaches the edge of our ecosystem. Consensys alumni have moved on to become tech entrepreneurs, CEOs, and team leads at tech companies.
About MetaMask
MetaMask aims to create a thriving engineering organization that supports the well-being of our engineers while empowering them to do work they are proud of and enjoy. We strive for an environment that gives our people high trust and autonomy, while also facilitating collaboration, communication and camaraderie among teams and teammates. We aspire to build a diverse engineering team, inclusive to people from all backgrounds and demographics. It is also of great importance to us that working at MetaMask is an experience that catalyzes career growth and learning.
What you'll do
We're looking for a Release Engineer to support and streamline the delivery of our Mobile (iOS & Android) and Extension (browser-based) applications. This is not a traditional DevOps position - it's a hands-on, team-facing role focused on ensuring build quality, release consistency, and delivery velocity across platforms.
You'll work closely with engineers, QA, and product managers to support day-to-day release needs, review cherry-picks, manage builds, validate artifacts, and continuously improve our tooling and processes.
Responsibilities:
Support engineering teams with day-to-day release tasks, including cherry-pick reviews, build validations, and release coordination
Own and maintain CI/CD pipelines using Bitrise, GitHub Actions, GitHub Runners, Advanced GH Workflows and CircleCI
Automate and improve build/test/release workflows via scripting (Bash, TypeScript/JavaScript, etc.)
Monitor build health, troubleshoot issues, and proactively resolve blockers
Ensure proper versioning, tagging, and signing of builds for mobile and browser extension platforms
Submit and track releases on App Store, Play Store, and extension stores (Chrome, Firefox, Edge)
Partner with QA to integrate and validate unit, UI, and E2E tests in release workflows
Maintain clear documentation and checklists for release processes
Coordinate feature flag rollouts using LaunchDarkly in collaboration with engineering and product
What we're looking for:
3+ years of experience in release engineering, CI/CD, or build/release coordination (not just infrastructure)
Experience supporting cross-functional teams and reviewing/cherry-picking production code branches
Deep knowledge of CI/CD tools: Bitrise, GitHub Actions, CircleCI, GitHub Runners
Strong scripting ability with GitHub Actions and YAML-based systems
Git workflows (trunk-based, Git Flow) and semantic versioning
Excellent communication skills and the ability to support fast-moving teams
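As a small illustration of the semantic versioning called for above, a release-tag bump can be sketched in a few lines of Python. This is a generic sketch of the MAJOR.MINOR.PATCH convention, not MetaMask's actual release tooling:

```python
def bump_version(version, part):
    """Bump a semantic version string 'MAJOR.MINOR.PATCH'.
    `part` is one of 'major', 'minor', or 'patch'."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature: reset patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"  # bug fix only
    raise ValueError(f"unknown part: {part}")
```

In a release pipeline, a function like this feeds git tagging (`git tag vX.Y.Z`) and the version fields submitted to the App Store, Play Store, and extension stores.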
Would be great if you brought this to the role
Experience with LaunchDarkly or other feature flag systems
Understanding of staged rollouts, crash monitoring, and store compliance processes
Familiarity with extension-specific policies (e.g., CSP, permission management)
iOS builds & release (Fastlane, Xcode, TestFlight, App Store Connect)
Android builds & release (Gradle, Play Console)
Browser extensions (manifest v2/v3, WebExtension APIs, Chrome/Firefox/Edge stores)
Contributions to release tools, automation frameworks, or documentation in open source or prior roles
Don't meet all the requirements? Don't sweat it. We're passionate about building a diverse team of humans and as such, if you think you've got what it takes for our chaotic-but-fun, remote-friendly, start-up environment, apply anyway, detailing your relevant transferable skills in your cover letter. While we have a pretty good idea of what we need, we're ready for you to challenge our thinking on who needs to be in this role.
It is a requirement of employment in this position that applicants will be required to submit to background checks including but not limited to employment, education and criminal record checks. Further details will be provided to applicants that successfully meet the criteria for the position as determined by the company in its sole discretion. By submitting an application for employment, you are acknowledging and consenting to this requirement.
Linux Engineer
Devops engineer job in Jersey City, NJ
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on demand and total workforce solutions. We have excellent domain expertise in all verticals. We provide long-term solutions with quality as our main focus. To learn more about US Tech Solutions, please visit our website ************************
Qualifications
• 7-10 years of experience in Red Hat Linux administration in small and medium scale environments.
• Troubleshooting databases, applications, and services on Red Hat Linux.
• Extensive Red Hat Linux networking knowledge, including but not limited to VLANs, bonding, and link aggregation.
• Install, configure, patch, and automate Red Hat Linux server builds using Kickstart/Puppet automation.
• 3-5 years of experience with IBM AIX Unix administration in small and medium scale environments.
• System migrations in Red Hat Linux and UNIX (IBM AIX) environments.
• Expert in installation, administration, maintenance, and troubleshooting of Red Hat Linux and IBM AIX Unix servers.
Tech Specs:
• Red Hat Linux system architecture and administration
• Red Hat 5.10, 6.2, and 6.5
• System build automation using Kickstart, Puppet and NIM
• IBM AIX Unix Administration
• FCoE Technology
• EMC Storage implementation
• Script writing
• LDAP
Additional Information
Local candidates only.
Software Engineer, Data Platform
Devops engineer job in New York, NY
About Sony Music Entertainment
At Sony Music Entertainment, we fuel the creative journey. We've played a pioneering role in music history, from the first-ever music label to the invention of the flat disc record. We've nurtured some of music's most iconic artists and produced some of the most influential recordings of all time.
Today, we work in more than 70 countries, supporting a diverse roster of international superstars, developing and independent artists, and visionary creators. From our position at the intersection of music, entertainment, and technology, we bring imagination and expertise to the newest products and platforms, embrace new business models, employ breakthrough tools, and provide powerful insights that help our artists push creative boundaries and reach new audiences. In everything we do, we're committed to artistic integrity, transparency, and entrepreneurship.
Sony Music Entertainment is a member of the Sony family of global companies.
Sony Music's Product Design Engineering and Global Operations (PDEGO) Team is looking for a Software Engineer to join our data platform development team in our 25 Madison Ave Office (NYC).
As a Software Engineer with PDEGO:
What you'll do:
Work with a cross-functional team to build products that empower artists and record labels across the globe
Contribute to all tiers of our architecture to produce high quality, robust user experiences
Write clean, tested, maintainable code
Work closely with product management to understand client requirements
Participate in re-architecture, refinement and technical design of various systems
Who you are:
We're seeking a Data Platform Software Engineer with 4+ years of experience in the following areas:
Strong experience implementing distributed systems and scalable data solutions.
Proficiency in a scripting language, such as Python, for data pipeline development and automation.
Deep experience with various database and data warehousing technologies and query languages (we use Neo4j, Kafka, MySQL, Snowflake, Elasticsearch, and more).
Experience designing, building, and maintaining robust ETL/ELT data pipelines, including orchestration tools like Airflow, for large-scale data ingestion and transformation.
Experience with data transformation tools such as dbt (data build tool).
Familiarity with techniques for data quality monitoring and ensuring data reliability.
Experience with microservices, APIs, and related standards such as REST and HTTP.
Comfortable with AWS Cloud technologies and cloud-native data services.
Experience with Terraform or similar Infrastructure as Code (IaC) tools for managing and provisioning cloud resources.
Experience using log analysis and monitoring tools to investigate data issues and pipeline performance.
Experience writing unit and integration tests to ensure data integrity.
Experience working in an agile team.
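As a rough sketch of the ETL/ELT pattern named in the requirements above, here is a toy extract-transform-load pipeline in plain Python. The record fields and values are invented for illustration; a production pipeline would run under an orchestrator like Airflow and load into a warehouse such as Snowflake rather than a dict:

```python
import json

# Simulated raw extract: JSON lines as they might arrive from an upstream feed.
RAW_RECORDS = [
    '{"track_id": 1, "streams": "1200", "country": "US"}',
    '{"track_id": 2, "streams": "not_a_number", "country": "DE"}',
    '{"track_id": 3, "streams": "800", "country": "US"}',
]

def extract(lines):
    """Parse each raw JSON line into a dict."""
    return [json.loads(line) for line in lines]

def transform(records):
    """Coerce types and drop rows that fail validation (a data-quality gate)."""
    clean = []
    for rec in records:
        try:
            rec["streams"] = int(rec["streams"])
        except (ValueError, TypeError):
            continue  # skip unparseable rows instead of failing the pipeline
        clean.append(rec)
    return clean

def load(records):
    """Aggregate per-country stream counts into a stand-in 'warehouse'."""
    warehouse = {}
    for rec in records:
        warehouse[rec["country"]] = warehouse.get(rec["country"], 0) + rec["streams"]
    return warehouse

result = load(transform(extract(RAW_RECORDS)))
print(result)  # {'US': 2000} -- the DE row is dropped by the quality gate
```

The same extract/transform/load stages map directly onto Airflow tasks, with the quality gate corresponding to the data-reliability monitoring the posting mentions.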
What we give you:
You join an inclusive, collaborative and global community where you have the opportunity to channel your passion every day
A modern office environment designed to foster productivity, creativity, and teamwork empowering you to bring your best
An attractive and comprehensive benefits package including medical, dental, vision, life & disability coverage, and 401K + employer matching
Voluntary benefits like company-paid identity theft protection and resources for pets, mental health and meditation resources, industry-leading fertility coverage, fully paid leave for childbirth or bonding, fully paid leave for caregivers, programs for loved ones with developmental disabilities and neurodiversity, subsidized back-up child and elder care, and reimbursement for adoption, surrogacy, tuition, and student loans
Investment in your professional growth and development enabling you to thrive in our vibrant community.
The space to accelerate progress, positively disrupt, and create what happens next
Time off for a winter recess
Sony Music is committed to providing equal employment opportunity for all persons regardless of age, disability, national origin, race, color, religion, sex, sexual orientation, gender, gender identity or expression, pregnancy, veteran or military status, marital and civil partnership/union status, alienage or citizenship status, creed, genetic information or any other status protected by applicable federal, state, or local law.
The anticipated annual base salary does not include any other compensation components or other benefits that an individual may be eligible for. The actual base salary offered depends on a variety of factors, which may include, as applicable, the qualifications of the individual applicant for the position, years of relevant experience, specific and unique skills, level of education attained, certifications or other professional licenses held, and the location in which the applicant lives and/or from which they will be performing the job.
New York Pay Range: $130,434-$143,478 USD
Linux Engineer
DevOps engineer job in New York, NY
Linux Engineers work with the critical infrastructure underlying the rest of the firm's technology. Members of this group are hard-working Systems Engineers, Administrators and Programmers, tasked with maintaining and improving the platform that powers Jane Street's production trading systems. Our mix of in-house and open source software allows you to investigate and innovate at every level. On any given day, you could be debugging kernel performance, developing management tools, or resolving production issues in real time. Diving into tricky systems problems is our specialty.
Deployment automation, scalable configuration management, and obsessive monitoring are the focus of some of our ongoing projects. We automate as much of our work as we can, but not because we are lazy. We find that automation reduces our error rate and overall workload - plus, we think it's fun.
Working in our group provides opportunities for involvement with almost every other facet of the company. We work directly with colleagues in Trading, Technology, and Operations to build and maintain systems with a firm-wide scope. Using feedback from other groups and our custom monitoring tools, we strive to resolve production issues quickly, perform comprehensive root-cause analyses, and integrate long-term fixes in a clean and robust way.
About You
We are looking to hire Systems Programmers and Administrators with a deep knowledge of Unix internals and the Linux ecosystem. Candidates should have a willingness to learn OCaml, our language of choice, and meet the following requirements:
Bachelor's degree in Computer Science, Software Engineering or other technical discipline (or equivalent experience)
Clear and concise communication skills, as well as the ability to efficiently analyze and deconstruct technical problems
Deep knowledge of operating system fundamentals, especially Linux
Fluency with the Unix command line and shell scripting
Practical experience with modern Linux systems and systems programming concepts like C, sockets, virtual memory, and the process life cycle
Basic understanding of network protocols
Strong troubleshooting skills and knowledge of profiling/debugging tools such as gdb, perf, DTrace, eBPF, or SystemTap
Programming experience in any language (functional languages a plus)
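As a small taste of the systems-programming concepts the list above mentions (sockets and the process life cycle), here is a minimal sketch in Python; note this is not Jane Street's stack, which centers on OCaml, and a real program would put the two socket ends in separate processes:

```python
import socket

# A connected pair of sockets, as created by the socketpair(2) system call.
parent_sock, child_sock = socket.socketpair()

# In a real systems program the two ends would live in different processes
# after a fork(); here both stay in one process for brevity.
parent_sock.sendall(b"ping")
msg = child_sock.recv(4)          # blocking read, like read(2) on a fd
child_sock.sendall(msg.upper())   # echo back, transformed
reply = parent_sock.recv(4)

parent_sock.close()
child_sock.close()
print(reply)  # b'PING'
```

Debugging exactly this kind of blocking send/recv interaction with tools like strace, gdb, or eBPF probes is the day-to-day work the posting describes.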
If you're a recruiting agency and want to partner with us, please reach out to **********************************.
Hadoop Engineer/Unix/Linux
DevOps engineer job in New York, NY
ITConnectUS provides a wide range of Consulting, Web Design, Application Development, and IT Staffing services. We believe in delivering the highest-quality products at the best price.
Job Description
Requirement:
• As part of the implementation of the initiatives described above, LCR is looking to onboard an Applications Engineer to work under the engineering lead within the department for setup, integration, management, and Level 4 support of the related platforms and infrastructure.
Position Description
• The Application Engineer will be part of LCR's engineering team and will help onboard solutions into our secure enclave, purchase and configure hardware, be involved in software security architecture reviews, provide Level 4 support during hardware and software outages, and work on anything that requires hands-on technology engineering.
• The engineer will also work closely with the Development and Support teams to understand the business needs and make recommendations to ensure the invested technologies will integrate and conform to the PA and Firm platforms and standards.
Technical Skills:
• Must have hands-on experience and knowledge of Hadoop technology (especially Cloudera)
• Full hands-on knowledge of operating systems (mainly Linux and Windows Server 2008) and virtualization technologies like VMware
• Strong Shell and Perl scripting skills
• Very good working knowledge of technologies like web servers (Apache), application servers (WebSphere, Tomcat, .NET), databases (DB2, MS SQL Server, MySQL), Single Sign-On (SiteMinder), and storage technologies (NAS/SAN, from an implementation perspective; not expected to know them from a configuration perspective)
• Familiar with Unix troubleshooting, including performance, kernel parameters, network, and TCP/IP settings
• Ability to package middleware products on Unix and Windows with automated installation/deployment
• Basic familiarity with relational databases and query languages, data structures, indexing, etc., and their impact on application performance
• Should be familiar with basic security policies for secure hosting solutions, including Kerberos
• Should have performed capacity planning and performance tuning exercises
• Familiar with Server/Global Load Balancing (F5) and web proxies technologies
• Exposure to automation tools like Puppet, log management tools like Splunk, and unstructured database technologies like MarkLogic and MongoDB.
Experience
• Should have a total of 10 years of experience, with a minimum of 7 years as an application/Unix engineer.
• Prior experience working in a global financial organization
Additional Information
Thanks and Regards,
Happy Singh
847 258 9595 Ext:- 408
happy.singh(@)itconnectus.com
NLP Engineer
DevOps engineer job in New York, NY
Mercor is hiring an NLP Engineer on behalf of a leading AI lab. In this role, you'll build language pipelines for classification, retrieval-augmented generation (RAG), and tokenization. You'll design robust text analytics and evaluation frameworks that scale across multilingual corpora, powering advanced AI-driven systems. This role is ideal for candidates with a strong background in natural language processing, applied machine learning, and scalable text engineering.
You're a great fit if you:
- Have a background in computer science, computational linguistics, or related fields.
- Are proficient with Hugging Face Transformers, spaCy, tokenizers, and PyTorch.
- Have experience working with text formats like JSON/JSONL and building scalable data pipelines.
- Understand NLP tasks such as classification, entity recognition, tokenization, and retrieval.
- Are comfortable working with multilingual corpora and designing evaluation benchmarks.
- Have strong experience with text preprocessing, embedding generation, and model fine-tuning.
- Are curious about building RAG systems and neural search pipelines that combine IR and NLP.
Primary Goal of This Role:
To design and ship NLP pipelines for classification, tokenization, and RAG that can handle large-scale, multilingual corpora, with robust frameworks for text analytics, retrieval, and evaluation.
What You'll Do:
- Build NLP pipelines for classification, tokenization, and RAG tasks.
- Design scalable text analytics workflows that support multilingual datasets.
- Implement and fine-tune models with Hugging Face Transformers, PyTorch, and fastText.
- Develop evaluation frameworks for model performance across diverse corpora.
- Integrate NLP solutions into broader AI pipelines, including search and retrieval systems.
- Collaborate with AI researchers and engineers to ship robust, production-grade NLP systems.
Why This Role Is Exciting:
- You'll pioneer RAG and NLP pipelines that directly power next-generation AI applications.
- You'll work with state-of-the-art libraries and frameworks in NLP and IR.
- You'll contribute to multilingual, global-scale AI systems.
- You'll operate at the intersection of language, AI, and scalable engineering.
Pay & Work Structure:
- You'll be classified as an hourly contractor to Mercor.
- Paid weekly via Stripe Connect, based on hours logged.
- Part-time (20-30 hrs/week) with flexible hours; work from anywhere, on your schedule.
- Weekly bonus of $500-$1000 USD per 5 tasks.
- Remote and flexible working style.
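To sketch the retrieval half of a RAG pipeline like the one this role describes, here is a toy bag-of-words retriever in plain Python. A real pipeline would use a learned subword tokenizer and embeddings from Hugging Face Transformers; the tiny corpus, the naive tokenizer, and the cosine scoring here are purely illustrative:

```python
import math
from collections import Counter

def tokenize(text):
    """Naive lowercase/whitespace tokenizer (a stand-in for a subword tokenizer)."""
    return text.lower().split()

def cosine(a, b):
    """Cosine similarity between two token-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented three-document corpus.
CORPUS = [
    "transformers power modern language models",
    "retrieval augmented generation combines search and generation",
    "tokenization splits text into subword units",
]

def retrieve(query, corpus):
    """Return the corpus document most similar to the query."""
    qv = Counter(tokenize(query))
    scored = [(cosine(qv, Counter(tokenize(doc))), doc) for doc in corpus]
    return max(scored)[1]

best = retrieve("how does retrieval augmented generation work", CORPUS)
print(best)  # the document about retrieval-augmented generation wins
```

In a production RAG system the retrieved document would then be stuffed into a generation model's prompt; swapping the count vectors for dense embeddings turns this into the neural search the posting mentions.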