Data Engineer jobs at EMW

- 2833 jobs
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications:
    • Strong proficiency in Data Engineering and Data Modeling
    • Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    • Strong Python background
    • Expertise in Extract, Transform, Load (ETL) processes and tools (an illustrative Python sketch follows this listing)
    • Experience in designing, managing, and optimizing Data Warehousing solutions
    $91k-123k yearly est. 4d ago
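    The posting above includes no code; purely as a flavor of the ETL work it describes, here is a minimal, self-contained Python sketch that extracts hypothetical trade records from two feeds, maps them onto one schema, and loads them into an in-memory sink. Every feed name, field, and value is invented for illustration.

    ```python
    # Minimal ETL sketch: normalize raw trade records from two hypothetical
    # upstream feeds into one common schema. All field names are invented.
    from datetime import datetime, timezone

    def extract(raw_feeds):
        """Yield (source, record) pairs from each configured feed."""
        for source, records in raw_feeds.items():
            for rec in records:
                yield source, rec

    def transform(source, rec):
        """Map a raw record onto the common schema; coerce types and units."""
        if source == "equities":
            return {
                "instrument": rec["ticker"],
                "asset_class": "equity",
                "price": float(rec["px"]),
                "ts": datetime.fromtimestamp(rec["epoch"], tz=timezone.utc),
            }
        if source == "fixed_income":
            return {
                "instrument": rec["cusip"],
                "asset_class": "fixed_income",
                "price": float(rec["clean_price"]),
                "ts": datetime.fromisoformat(rec["trade_time"]),
            }
        raise ValueError(f"unknown source: {source}")

    def load(rows, sink):
        """Append normalized rows to a sink (stand-in for a warehouse table)."""
        sink.extend(rows)

    if __name__ == "__main__":
        feeds = {
            "equities": [{"ticker": "AAPL", "px": "189.30", "epoch": 1700000000}],
            "fixed_income": [{"cusip": "912828YK0", "clean_price": "99.125",
                              "trade_time": "2023-11-14T21:33:20+00:00"}],
        }
        warehouse = []
        load((transform(s, r) for s, r in extract(feeds)), warehouse)
        print(warehouse)
    ```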
  • Data Engineer

    Pyramid Consulting, Inc. (4.1 company rating)

    McLean, VA

    Immediate need for a talented Data Engineer. This is a 12-month contract opportunity with long-term potential, located in McLean, VA (Hybrid). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 25-93504
    Pay Range: $70-75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), a 401(k) plan, and paid sick leave (depending on work location).

    Key Responsibilities:
    • Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
    • Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
    • Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning (a PySpark window-function sketch follows this listing).
    • Develop backend and automation tools using Golang and/or Python as needed.
    • Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
    • Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
    • Perform root-cause analysis and implement automation to prevent recurring issues.
    • Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
    • Ensure compliance with enterprise governance, data quality, and cloud security standards.
    • Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.

    Key Requirements and Technology Experience:
    • Key skills: Python, Spark/PySpark, AWS (Glue, EC2, Lambda), Golang, Java/Python; able to write complex SQL queries against Snowflake tables and troubleshoot issues.
    • Proficiency in Python with experience building scalable data pipelines or ETL processes.
    • Strong hands-on experience with Spark/PySpark for distributed data processing.
    • Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
    • Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
    • Experience with Golang for scripting, backend services, or performance-critical processes.
    • Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
    • Familiarity with CI/CD workflows, Git, and automated testing.

    Our client is a leader in the Banking and Financial Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $70-75 hourly 1d ago
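    As a small illustration of the window-function and dedupe work called out above, here is a hedged PySpark sketch that keeps only the latest record per key. The data, column names, and app name are invented; an equivalent query could of course be written directly in Snowflake SQL.

    ```python
    # Hypothetical PySpark job: keep the newest record per account using a
    # window function, a common dedupe pattern in pipeline transformations.
    from pyspark.sql import SparkSession, Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dedupe-latest").getOrCreate()

    rows = [("a1", "2024-01-01", 100.0), ("a1", "2024-02-01", 120.0),
            ("a2", "2024-01-15", 80.0)]
    df = spark.createDataFrame(rows, ["account_id", "updated_at", "balance"])

    # row_number() over a per-account window ordered newest-first, keep rank 1.
    w = Window.partitionBy("account_id").orderBy(F.col("updated_at").desc())
    latest = (df.withColumn("rn", F.row_number().over(w))
                .filter(F.col("rn") == 1)
                .drop("rn"))
    latest.show()
    ```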
  • Lead Data Engineer

    APN Consulting, Inc. (4.5 company rating)

    New York, NY

    Job title: Lead Software Engineer
    Duration: Full-time/Contract-to-Hire

    Role description: The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on the HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives.

    Key Responsibilities:
    • Manage and support HR business applications, including problem resolution and issue ownership
    • Design and develop the ETL/ELT layer for HR data integration and ensure data quality and consistency
    • Provide architecture solutions for Data Modeling, Data Warehousing, and Data Governance
    • Develop and maintain data ingestion processes using Informatica, Python, and related technologies
    • Support data analytics and data science initiatives with optimized data structures and AI/ML tools
    • Manage vendor products and their integrations with internal/external applications
    • Gather requirements and translate functional needs into technical specifications
    • Perform QA testing and impact analysis across the BI ecosystem
    • Maintain system documentation and knowledge repositories
    • Provide technical guidance and manage stakeholder communications

    Required Skills & Experience:
    • Bachelor's degree in computer science or engineering with 4+ years of delivery and maintenance work experience in the Data and Analytics space.
    • Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL tools, advanced SQL, and Python programming.
    • Exposure to AI & ML technologies and experience tuning models and building LLM integrations.
    • Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics (a small EDA sketch follows this listing).
    • Extensive database development experience in MS SQL Server/Oracle and SQL scripting.
    • Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins.
    • Proficiency in collaboration tools such as Confluence, SharePoint, and JIRA.
    • Analytical skills to model business functions, processes, and data flow within or between systems.
    • Strong problem-solving skills to debug complex, time-critical production incidents.
    • Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams.
    • Experience with cloud data lake technologies such as Snowflake and knowledge of the HR data model would be a plus.
    $93k-133k yearly est. 1d ago
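    To make the EDA requirement above concrete, here is a tiny hedged pandas sketch: summarize key workforce metrics by department. The dataset, columns, and metrics are all invented for the example.

    ```python
    # Illustrative EDA pass over a made-up HR dataset: headcount, average
    # tenure, and attrition rate by department. Column names are invented.
    import pandas as pd

    hr = pd.DataFrame({
        "dept": ["Sales", "Sales", "Eng", "Eng", "Eng"],
        "tenure_years": [2.0, 5.5, 1.0, 3.2, 7.8],
        "attrited": [1, 0, 0, 1, 0],
    })

    summary = hr.groupby("dept").agg(
        headcount=("dept", "size"),
        avg_tenure=("tenure_years", "mean"),
        attrition_rate=("attrited", "mean"),
    )
    print(summary)
    ```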
  • Senior Data Engineer

    Pyramid Consulting, Inc. (4.1 company rating)

    McLean, VA

    Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential, located in McLean, VA (Remote). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 25-84666
    Pay Range: $64-68/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), a 401(k) plan, and paid sick leave (depending on work location).

    Key Responsibilities:
    • Demonstrated ability in implementing data warehouse solutions using modern data platforms such as Client, Databricks, or Redshift.
    • Build data integration solutions between transaction systems and analytics platforms.
    • Expand data integration solutions to ingest data from internal and external sources and to further transform it per business consumption needs.
    • Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI, and reporting.
    • Fundamental understanding of building data products through data enrichment and ML.
    • Act as a team player and share knowledge with existing team members.

    Key Requirements and Technology Experience:
    • Key skills: Python, AWS, Snowflake.
    • Bachelor's degree in computer science or a related field.
    • Minimum 5 years of experience in building data-driven solutions.
    • At least 3 years of experience working with AWS services.
    • Applicants must be authorized to work in the US without requiring employer sponsorship currently or in the future. U.S. FinTech does not offer H-1B sponsorship for this position.
    • Expertise in real-time data solutions; good-to-have knowledge of stream processing, message-oriented platforms, and ETL/ELT tools.
    • Strong scripting experience using Python and SQL.
    • Working knowledge of foundational AWS compute, storage, networking, and IAM.
    • Understanding of Gen AI models, prompt engineering, RAG, fine-tuning, and pre-tuning will be a plus.
    • Solid scripting experience in AWS using Lambda functions (a minimal handler sketch follows this listing). Knowledge of CloudFormation templates preferred.
    • Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Client.
    • Experience in building data pipelines, with a related understanding of data ingestion and transformation of structured, semi-structured, and unstructured data across cloud services.
    • Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions.
    • Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka, etc.
    • Strong understanding of data security: authorization, authentication, encryption, and network security.
    • Hands-on experience using and extending machine learning frameworks and libraries, e.g., scikit-learn, PyTorch, TensorFlow, XGBoost, preferred.
    • Experience with the AWS SageMaker family of services or similar tools to develop machine learning models preferred.
    • Strong written and verbal communication skills to facilitate meetings and workshops to collect data, functional, and technology requirements; document processes, data flows, and gap analysis; and support data management/governance efforts.
    • Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures.
    • Demonstrated ability to be self-directed, with excellent organization, analytical, and interpersonal skills, consistently meeting or exceeding deadline deliverables.
    • Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions.

    Our client is a leader in the Financial Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $64-68 hourly 3d ago
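    For the "scripting in AWS using Lambda functions" item above, here is a hedged boto3 sketch: an S3-triggered handler that starts a Glue job for each new object. The job name and argument keys are placeholders, not anything from the posting.

    ```python
    # Sketch of an S3-triggered Lambda handler that hands each new object
    # to a (hypothetical) Glue job. Job name and argument keys are invented.
    import urllib.parse
    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            run = glue.start_job_run(
                JobName="ingest-landing-zone",  # placeholder job name
                Arguments={"--source_path": f"s3://{bucket}/{key}"},
            )
            print(f"started {run['JobRunId']} for s3://{bucket}/{key}")
    ```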
  • Data Scientist

    Unisys (4.6 company rating)

    Reston, VA

    • Collect, clean, and preprocess large datasets from multiple sources.
    • Apply statistical analysis and machine learning techniques to solve business problems.
    • Build predictive models and algorithms to optimize processes and improve outcomes (a minimal scikit-learn sketch follows this listing).
    • Develop dashboards and visualizations to communicate insights effectively.
    • Collaborate with cross-functional teams (Product, Engineering, Risk, Marketing) to identify opportunities for leveraging data.
    • Ensure data integrity, security, and compliance with organizational standards.
    • Stay current with emerging technologies and best practices in data science and AI.

    Required Qualifications:
    • Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
    • Strong proficiency in Python, R, and SQL, and experience with data manipulation libraries (e.g., Pandas, NumPy).
    • Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
    • Solid understanding of statistical modeling, hypothesis testing, and data visualization.
    • Experience with big data platforms (e.g., Spark, Hadoop) and cloud environments (AWS, Azure, GCP).
    • Excellent problem-solving skills and ability to communicate complex concepts clearly.

    Preferred Qualifications:
    • Experience in risk modeling, financial services, or product analytics.
    • Knowledge of MLOps and deploying models in production.
    • Familiarity with data governance and compliance frameworks.

    Soft Skills:
    • Strong analytical thinking and attention to detail.
    • Ability to work independently and in a team environment.
    • Effective communication and stakeholder management skills.
    $71k-96k yearly est. 3d ago
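    As a flavor of the predictive-modeling work listed above, here is a minimal scikit-learn pipeline on synthetic data: train a classifier and report holdout accuracy. Everything here is illustrative; no detail comes from the posting.

    ```python
    # Minimal model-training sketch: fit a classifier on synthetic data and
    # evaluate it on a holdout split.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("holdout accuracy:", accuracy_score(y_te, model.predict(X_te)))
    ```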
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group (4.5 company rating)

    New York, NY

    Title: Data Engineer (Web Scraping technologies)
    Duration: FTE/Perm
    Salary: $125-190k plus bonus

    Responsibilities:
    • Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability.
    • Manage web scraping requests end to end, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scraped data, validation, and entitlement to users.
    • Field questions from users about the scrapes and websites.
    • Coordinate with Compliance on approvals and TOU reviews.
    • Experience building data pipelines on the AWS platform utilizing existing tools like Cron, Glue, EventBridge, Python-based ETL, and AWS Redshift.
    • Normalize/standardize vendor data and firm data for firm consumption.
    • Implement data quality checks to ensure reliability and accuracy of scraped data.
    • Coordinate with internal teams on delivery, access, requests, and support.
    • Promote data engineering best practices.

    Required Skills and Qualifications:
    • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
    • 2-5 years of experience in a similar role.
    • Prior buy-side experience is strongly preferred (multi-strat/hedge funds).
    • Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems.
    • AWS cloud experience with common services (S3, Lambda, Cron, EventBridge, etc.).
    • Experience with web scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.); a toy parse-and-validate sketch follows this listing.
    • Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools.
    • Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.).
    • Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
    • Strong communication skills to work with stakeholders across technology, investment, and operations teams.
    $86k-120k yearly est. 4d ago
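    A toy version of the scrape-then-validate workflow named above, using BeautifulSoup on an inline HTML snippet so it runs offline. The HTML, selectors, and field names are invented; a real scrape would also go through the TOU/Compliance review the posting describes.

    ```python
    # Parse a small price table with BeautifulSoup and apply a basic data
    # quality check before "storing" rows; bad records are quarantined.
    from bs4 import BeautifulSoup

    html = """
    <table id="prices">
      <tr><td class="sym">ABC</td><td class="px">101.25</td></tr>
      <tr><td class="sym">XYZ</td><td class="px">n/a</td></tr>
    </table>
    """

    soup = BeautifulSoup(html, "html.parser")
    rows, rejected = [], []
    for tr in soup.select("#prices tr"):
        sym = tr.select_one(".sym").get_text(strip=True)
        px_text = tr.select_one(".px").get_text(strip=True)
        try:
            rows.append({"symbol": sym, "price": float(px_text)})  # quality gate
        except ValueError:
            rejected.append(sym)  # quarantine instead of loading bad data

    print("loaded:", rows)
    print("rejected:", rejected)
    ```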
  • Cloud Data Engineer

    Gotham Technology Group (4.5 company rating)

    New York, NY

    Title: Enterprise Data Management - Data Cloud, Senior Developer I
    Duration: FTE/Permanent
    Salary: $130-165k

    The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.

    Responsibilities:
    • Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures.
    • Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications (a hedged sketch follows this listing).
    • Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools.
    • Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency.
    • Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation.
    • Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications.
    • Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access.
    • Champion data engineering standards, best practices, and governance across the enterprise.
    • Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning.
    • Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.

    Qualifications:
    • Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent.
    • 10+ years in data engineering, cloud platform development, or analytics engineering.
    • Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies.
    • Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers).
    • Preferred: experience containerizing data workloads with Docker and Kubernetes.
    • Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios.
    • Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products.
    • Bonus: background in data mesh designs and automated data access systems.
    • Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow.
    • Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines.
    • Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling.
    • Bonus: exposure to BI tools (e.g., Tableau, Power BI), though not central to the role.
    • Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
    $86k-120k yearly est. 4d ago
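    A heavily hedged sketch of one building block mentioned above: materializing a simple semantic-layer view in Snowflake from Python. All connection values and object names are placeholders, and in practice a team like this might express the same model in dbt rather than hand-run DDL.

    ```python
    # Create/replace a simple semantic-layer view via the Snowflake Python
    # connector. Account, credentials, and table names are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",   # placeholder
        user="etl_user",        # placeholder
        password="...",         # use a secrets manager in real pipelines
        warehouse="TRANSFORM_WH",
        database="ANALYTICS",
        schema="SEMANTIC",
    )

    DDL = """
    CREATE OR REPLACE VIEW fct_positions_current AS
    SELECT account_id,
           instrument_id,
           SUM(quantity) AS net_quantity
    FROM   ANALYTICS.RAW.position_events
    GROUP BY account_id, instrument_id
    """

    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.close()
    ```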
  • Data Engineer

    Gotham Technology Group (4.5 company rating)

    New York, NY

    Our client is seeking a Data Engineer with hands-on experience in Web Scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves.

    Responsibilities:
    • Develop scalable web scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries.
    • Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring.
    • Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines.
    • Build and support AWS-based data pipelines using tools such as Cron, Glue, EventBridge, Lambda, Python ETL, and Redshift.
    • Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm.
    • Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets (a small continuity-check sketch follows this listing).
    • Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users.
    • Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns.

    Required Qualifications:
    • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
    • 2-5 years of experience in a similar Data Engineering or Web Scraping role.
    • Capital markets knowledge with familiarity across asset classes and experience supporting trading systems.
    • Strong hands-on experience with AWS services (S3, Lambda, EventBridge, Cron, Glue, Redshift).
    • Proficiency with modern web scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright).
    • Strong Python programming skills and experience with SQL and NoSQL databases.
    • Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus.
    • Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
    $86k-120k yearly est. 4d ago
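    To illustrate the "historical continuity" monitoring duty above, here is a small self-contained Python check that flags calendar gaps in a scraped daily series. The dataset and the decision to check raw calendar days (rather than, say, business days) are invented for the example.

    ```python
    # Verify a scraped daily series has no date gaps before publishing it.
    from datetime import date, timedelta

    def missing_days(observed, start, end):
        """Return the calendar dates between start and end with no observation."""
        expected = {start + timedelta(days=i)
                    for i in range((end - start).days + 1)}
        return sorted(expected - set(observed))

    scraped = [date(2024, 5, 1), date(2024, 5, 2), date(2024, 5, 4)]
    gaps = missing_days(scraped, date(2024, 5, 1), date(2024, 5, 4))
    if gaps:
        print("historical continuity check failed; missing:", gaps)
    ```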
  • Cloud Data Engineer- Databricks

    Infocepts (3.7 company rating)

    McLean, VA

    Purpose: We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.

    Key Result Areas and Activities:
    • Design and implement robust, scalable data engineering solutions.
    • Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI (a generic PySpark batch step follows this listing).
    • Collaborate with analytics and AI teams to enable real-time and batch data workflows.
    • Support and improve cloud-native data platforms (AWS, Azure, GCP).
    • Ensure adherence to best practices in data modeling, warehousing, and governance.
    • Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices.
    • Implement and maintain workflow orchestration tools like Apache Airflow and dbt.

    Essential Skills:
    • 4+ years of experience in data engineering with a focus on scalable solutions.
    • Strong hands-on experience with Databricks in a cloud environment.
    • Proficiency in Spark and Python for data processing.
    • Solid understanding of data modeling, data warehousing, and architecture principles.
    • Experience working with at least one major cloud provider (AWS, Azure, or GCP).
    • Familiarity with CI/CD pipelines and data workflow automation.

    Desirable Skills:
    • Direct experience with Unity Catalog and Mosaic AI within Databricks.
    • Working knowledge of DevOps/DataOps principles in a data engineering context.
    • Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.

    Qualifications:
    • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    • Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.

    Qualities:
    • Able to consult, write, and present persuasively.
    • Able to work in a self-organized and cross-functional team.
    • Able to iterate based on new information, peer reviews, and feedback.
    • Able to work seamlessly with clients across multiple geographies.
    • Research-focused mindset.
    • Excellent analytical, presentation, reporting, documentation, and interactive skills.

    "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $77k-105k yearly est. 3d ago
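    A generic hedged sketch of the kind of PySpark batch step this role would build on Databricks: read raw files, aggregate, and write a partitioned output. Paths, columns, and the aggregation are placeholders; on Databricks the `spark` session already exists and Delta Lake would typically replace plain Parquet.

    ```python
    # Generic batch step: raw JSON in, daily revenue aggregate out.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-daily").getOrCreate()

    orders = spark.read.json("/landing/orders/")          # placeholder path
    daily = (orders
             .withColumn("order_date", F.to_date("order_ts"))
             .groupBy("order_date", "region")
             .agg(F.sum("amount").alias("revenue")))

    (daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("/curated/orders_daily/"))             # placeholder path
    ```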
  • Big Data Developer

    Capgemini (4.5 company rating)

    New York, NY

    We're looking for a seasoned Senior Data Engineer with strong Hadoop expertise to design, build, and scale data pipelines and platforms powering analytics, AI/ML, and business operations. You'll own end-to-end data engineering, from ingestion and transformation to performance optimization, across large-scale distributed systems and modern cloud data platforms.

    Key Responsibilities:
    • Design & Build Data Pipelines: Architect, develop, and maintain robust ETL/ELT pipelines for batch and streaming data using the Hadoop ecosystem, Spark, and Airflow.
    • Big Data Architecture: Define and implement scalable big data architectures, ensuring reliability, fault tolerance, and cost efficiency.
    • Data Modeling: Develop and optimize data models for the Data Warehouse and Operational Data Store (ODS); ensure conformed dimensions and star/snowflake schemas where appropriate.
    • SQL Expertise: Write, optimize, and review complex SQL/HiveQL queries for large datasets; enforce query standards and patterns.
    • Performance Tuning: Optimize Spark jobs, SQL queries, storage formats (e.g., Parquet/ORC), partitioning, and indexing to improve latency and throughput (a broadcast-join sketch follows this listing).
    • Data Quality & Governance: Implement data validation, lineage, cataloging, and security controls across environments.
    • Workflow Orchestration: Build and manage DAGs in Airflow, ensuring observability, retries, alerting, and SLAs.
    • Cross-functional Collaboration: Partner with Data Science, Analytics, and Product teams to deliver reliable datasets and features.
    • Best Practices: Champion coding standards, CI/CD, infrastructure-as-code (IaC), and documentation across the data platform.

    Required Qualifications:
    • 7+ years of hands-on data engineering experience building production-grade pipelines.
    • Strong experience with Hadoop (HDFS, YARN), Hive SQL/HiveQL, Spark (Scala/Java/PySpark), and Airflow.
    • Expert-level SQL skills with the ability to write and tune complex queries on large datasets.
    • Solid understanding of big data architecture patterns (e.g., lakehouse, data lake + warehouse, CDC).
    • Deep knowledge of ETL/ELT and DW/ODS concepts (slowly changing dimensions, partitioning, columnar storage, incremental loads).
    • Proven track record in performance tuning for large-scale systems (Spark jobs, shuffle optimizations, broadcast joins, skew handling).
    • Strong programming background in Java and/or Scala (Python is a plus).

    Preferred Skills:
    • Experience with AI-driven data processing (feature engineering pipelines, ML-ready datasets, model data dependencies).
    • Hands-on experience with cloud data platforms (AWS, GCP, or Azure) and services like EMR/Dataproc/HDInsight, S3/GCS/ADLS, Glue/Dataflow, BigQuery/Snowflake/Redshift/Synapse.
    • Exposure to NoSQL databases (Cassandra, HBase, DynamoDB, MongoDB).
    • Advanced data governance & security (row/column-level security, tokenization, encryption at rest/in transit, IAM/RBAC, data lineage/catalog).
    • Familiarity with Kafka (topics, partitions, consumer groups, schema registry, stream processing).
    • Experience with CI/CD for data (Git, Jenkins/GitHub Actions, Terraform) and containerization (Docker, Kubernetes).
    • Knowledge of metadata management and data observability (Great Expectations, Monte Carlo, OpenLineage).

    Life at Capgemini: Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
    • Flexible work
    • Healthcare including dental, vision, mental health, and well-being programs
    • Financial well-being programs such as 401(k) and Employee Share Ownership Plan
    • Paid time off and paid holidays
    • Paid parental leave
    • Family building benefits like adoption assistance, surrogacy, and cryopreservation
    • Social well-being benefits like subsidized back-up child/elder care and tutoring
    • Mentoring, coaching and learning programs
    • Employee Resource Groups
    • Disaster Relief

    Disclaimer: Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law. This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Click the following link for more information on your rights as an Applicant **************************************************************************
    $87k-130k yearly est. 5d ago
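    One concrete instance of the tuning items above: forcing a broadcast join so a small dimension table is shipped to executors instead of triggering a shuffle. The tables and contents are invented for the demo.

    ```python
    # Broadcast-join sketch: replicate the small side of the join to avoid
    # a shuffle-heavy sort-merge join.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("broadcast-demo").getOrCreate()

    facts = spark.createDataFrame(
        [(1, "US", 10.0), (2, "DE", 7.5)], ["order_id", "country", "amount"])
    dims = spark.createDataFrame(
        [("US", "Americas"), ("DE", "EMEA")], ["country", "region"])

    # F.broadcast() hints Spark to replicate the small table to all executors.
    joined = facts.join(F.broadcast(dims), "country")
    joined.explain()  # plan should show BroadcastHashJoin, not SortMergeJoin
    joined.show()
    ```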
  • Lead Data Engineer with Banking

    Synechron (4.4 company rating)

    New York, NY

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and has 58 offices in 21 countries within key global markets.

    Our challenge: We are seeking an experienced Lead Data Engineer to spearhead our data infrastructure initiatives. The ideal candidate will have a strong background in building scalable data pipelines, with hands-on expertise in Kafka, Snowflake, and Python. As a key technical leader, you will design and maintain robust streaming and batch data architectures, optimize data loads in Snowflake, and drive automation and best practices across our data platform. (A hedged Kafka consumer sketch follows this listing.)

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k-140k/year & benefits (see below).

    Responsibilities:
    • Design, develop, and maintain reliable, scalable data pipelines leveraging Kafka, Snowflake, and Python.
    • Lead the implementation of distributed data processing and real-time streaming solutions.
    • Manage Snowflake data warehouse environments, including data loading, tuning, and optimization for performance and cost-efficiency.
    • Develop and automate data workflows and transformations using Python scripting.
    • Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions.
    • Monitor, troubleshoot, and optimize data pipelines and platform performance.
    • Ensure data quality, governance, and security standards are upheld.
    • Guide and mentor junior team members and foster best practices in data engineering.

    Requirements:
    • Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    • Strong expertise in distributed data processing frameworks and streaming architectures.
    • Hands-on experience with the Snowflake data warehouse platform, including data ingestion, performance tuning, and management.
    • Proficiency in Python for data manipulation, automation, and scripting.
    • Familiarity with Kafka ecosystem tools such as Confluent, Kafka Connect, and Kafka Streams.
    • Solid understanding of SQL, data modeling, and ETL/ELT processes.
    • Knowledge of cloud platforms (AWS, Azure, GCP) is advantageous.
    • Strong troubleshooting skills and ability to optimize data workflows.
    • Excellent communication and collaboration skills.

    Preferred, but not required:
    • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
    • Experience with containerization (Docker, Kubernetes) is a plus.
    • Knowledge of data security best practices and GDPR compliance.
    • Certifications related to cloud platforms or data engineering preferred.

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity & paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-140k yearly 5d ago
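    A hedged sketch of the Kafka-to-warehouse flow this role owns, using the kafka-python client: consume JSON events and buffer them for a batched load. The topic, brokers, batch size, and the load step itself are placeholders, not details from the posting.

    ```python
    # Consume JSON events from Kafka and flush them in batches; offsets are
    # committed only after a (simulated) successful warehouse load.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "trades",                              # placeholder topic
        bootstrap_servers=["localhost:9092"],  # placeholder brokers
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,
    )

    BATCH = 500
    buffer = []
    for msg in consumer:
        buffer.append(msg.value)
        if len(buffer) >= BATCH:
            # A real pipeline would run a Snowflake COPY INTO / Snowpipe
            # load here; this sketch just drains the buffer.
            print(f"flushing {len(buffer)} events to the warehouse")
            buffer.clear()
            consumer.commit()  # commit offsets only after a successful load
    ```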
  • Sr. Azure Data Engineer

    Synechron (4.4 company rating)

    New York, NY

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and has 58 offices in 21 countries within key global markets.

    Our challenge: We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the Financial/Banking domain.

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k-140k/year & benefits (see below).

    Responsibilities:
    • Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
    • Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
    • Help voluntarily and proactively, supporting team members and peers in delivering their tasks to ensure end-to-end delivery.
    • Evaluate technical performance challenges and recommend tuning solutions.
    • Act as a hands-on data service engineer to design, develop, and maintain our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python.

    Requirements:
    • Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    • Strong expertise in distributed data processing and streaming architectures.
    • Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management.
    • Proficiency in Python scripting and programming for data manipulation and automation.
    • Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
    • Knowledge of SQL, data modeling, and ETL/ELT processes.
    • Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
    • Domain knowledge in any of the below areas: Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.); strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes; Funding Support, Planning & Analysis, Regulatory Reporting & Compliance.
    • Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity & paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $130k-140k yearly 1d ago
  • Senior Data Engineer

    Zillion Technologies, Inc. (3.9 company rating)

    McLean, VA

    The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience in Angular and DevOps is a nice-to-have for this role.

    Must-Have Qualifications: PySpark/Python-based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify.

    Responsibilities:
    • Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project. New system: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest (a minimal Pytest sketch follows this listing).
    • Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Current system: Informatica, SAS, AutoSys, DB2.
    • Write automated tests using Behave/Cucumber, based on the new microservices-based architecture.
    • Promote top code quality and solve issues related to performance tuning and scalability.
    • Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, plus experience with SonarQube and Fortify.
    • Able to communicate and engage with business teams, analyze current business requirements (BRS documents), and create necessary data mappings.
    • Preferred: strong skills and experience in reporting application development and data analysis.
    • Knowledge of Agile methodologies and technical documentation.
    $77k-109k yearly est. 2d ago
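    To give the Pytest requirement above some flavor, here is a minimal unit test for a small transformation function of the kind these microservices might contain. The function and its contract are invented for the example.

    ```python
    # Minimal Pytest sketch: happy-path and error-path tests for a tiny
    # transformation helper. Run with `pytest`.
    import pytest

    def normalize_rate(raw):
        """Convert a percent string like '5.25%' to a decimal fraction."""
        value = float(raw.strip().rstrip("%"))
        return value / 100.0

    def test_normalize_rate_happy_path():
        assert normalize_rate("5.25%") == pytest.approx(0.0525)

    def test_normalize_rate_rejects_garbage():
        with pytest.raises(ValueError):
            normalize_rate("not-a-rate")
    ```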
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. (4.0 company rating)

    New York, NY

    NYC, NY / Hybrid. 12+ month project.

    The project leverages Llama to extract cybersecurity insights from unstructured data in the client's ticketing system (a heavily hedged sketch follows this listing).

    Must have strong experience with:
    • Llama
    • Python
    • Hadoop
    • MCP
    • Machine Learning (ML)

    They need a strong developer using Llama and Hadoop (this is where the data sits), with experience in MCP. They have various ways to pull the data out of their tickets, but they want someone who can come in, make recommendations on the best approach, and then get it done. They have tight timelines.

    Thanks and Regards!
    Lavkesh Dwivedi
    ************************
    Amtex System Inc.
    28 Liberty Street, 6th Floor | New York, NY - 10005
    ************ ********************
    $78k-104k yearly est. 1d ago
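    Purely to illustrate the core extraction task, here is a heavily hedged sketch using llama-cpp-python to prompt a local Llama model for structured fields from a free-text ticket. The model path, ticket text, prompt, and output keys are all placeholders; the client's actual Hadoop/MCP plumbing is out of scope.

    ```python
    # Prompt a local Llama model (via llama-cpp-python) to pull structured
    # security insights from a free-text ticket. Everything here is invented.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf")  # placeholder

    ticket = "User reports phishing email with a credential-harvesting link..."
    prompt = (
        "Extract JSON with keys incident_type, severity, indicators from "
        f"this ticket:\n{ticket}\nJSON:"
    )

    out = llm(prompt, max_tokens=200, temperature=0.0)
    print(out["choices"][0]["text"])  # downstream code would parse this JSON
    ```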
  • Cloud Data Architect

    Infocepts (3.7 company rating)

    McLean, VA

    Purpose: As a Cloud Data Architect, you'll be at the forefront of innovation, guiding clients and teams through the design and implementation of cutting-edge solutions using Databricks, modern data platforms, and cloud-native technologies. In this role, you won't just architect solutions; you'll help grow a thriving Analytics & Data Management practice, act as a trusted Databricks SME, and bring a business-first mindset to every challenge. You'll have the opportunity to lead delivery efforts, build transformative data solutions, and cultivate strategic relationships with Fortune-500 organizations.

    Key Result Areas and Activities:
    • Architect and deliver scalable, cloud-native data solutions across various industries.
    • Lead data strategy workshops and AI/ML readiness assessments.
    • Develop solution blueprints leveraging Databricks (Lakehouse, Delta Lake, MLflow, Unity Catalog).
    • Conduct architecture reviews and build proof-of-concept (PoC) prototypes on platforms like Databricks, AWS, Azure, and Snowflake.
    • Engage with stakeholders to define and align future-state data strategies with business outcomes.
    • Mentor and lead data engineering and architecture teams.
    • Drive innovation and thought leadership across client engagements and internal practice areas.
    • Promote FinOps practices, ensuring cost optimization within multi-cloud deployments.
    • Support client relationship management and engagement expansion through consulting excellence.

    Essential Skills:
    • 10+ years of experience designing and delivering scalable data architecture and solutions.
    • 5+ years in consulting, with demonstrated client-facing leadership.
    • Expertise in the Databricks ecosystem, including Delta Lake, Lakehouse, Unity Catalog, and MLflow.
    • Strong hands-on knowledge of cloud platforms (Azure, AWS, Databricks, and Snowflake).
    • Proficiency in Spark and Python for data engineering and processing tasks.
    • Solid grasp of enterprise data architecture frameworks such as TOGAF and DAMA.
    • Demonstrated ability to lead and mentor teams, manage multiple projects, and drive delivery excellence.
    • Excellent communication skills with proven ability to consult and influence executive stakeholders.

    Desirable Skills:
    • Recognized thought leadership in emerging data and AI technologies.
    • Experience with FinOps in multi-cloud environments, particularly with Databricks and AWS cost optimization.
    • Familiarity with data governance and data quality best practices at the enterprise level.
    • Knowledge of DevOps and MLOps pipelines in cloud environments.

    Qualifications:
    • Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or related fields.
    • Professional certifications in Databricks, AWS, Azure, or Snowflake preferred.
    • TOGAF, DAMA, or other architecture framework certifications are a plus.

    Qualities:
    • Self-motivated and focused on delivering outcomes for a fast-growing team and firm.
    • Able to communicate and consult persuasively through speaking, writing, and client presentations.
    • Able to work in a self-organized and cross-functional team.
    • Able to iterate based on new information, peer reviews, and feedback.
    • Able to work with teams and clients in different time zones.
    • Research-focused mindset.

    "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $87k-121k yearly est. 3d ago
  • Azure Data Architect

    Synechron (4.4 company rating)

    New York, NY

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and has 58 offices in 21 countries within key global markets.

    Our challenge: We are seeking an experienced Azure Data Architect to design and implement scalable, secure, and efficient data solutions on Azure Cloud for our financial services client. The architect will lead the development of data platforms using Azure services and Databricks, ensuring robust data architecture that supports business objectives and regulatory compliance.

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role if filled within New York City, NY is $135k-150k/year & benefits (see below).

    Key Responsibilities:
    • Design and develop end-to-end data architecture on Azure Cloud, including data ingestion, storage, processing, and analytics solutions.
    • Lead the deployment of Databricks environments and integrate them seamlessly with other Azure services.
    • Collaborate with stakeholders to gather requirements and translate them into architectural designs.
    • Ensure data security, privacy, and compliance standards are met within the architecture.
    • Optimize data workflows and pipelines for performance and cost-efficiency.
    • Provide technical guidance and mentorship to development teams.
    • Keep abreast of the latest Azure and Databricks technologies and incorporate best practices.

    Qualifications:
    • Extensive experience designing and implementing data architectures on Azure Cloud.
    • Deep understanding of the Databricks platform and its integration with Azure services.
    • Strong knowledge of data warehousing, data lakes, and real-time streaming solutions.
    • Proficiency in SQL, Python, Scala, or Spark.
    • Experience with Azure Data Factory, Azure Data Lake, Azure SQL, and Azure Synapse Analytics.
    • Solid understanding of security, governance, and compliance in cloud data solutions.

    Preferred, but not required:
    • Experience working in the financial services domain.
    • Knowledge of machine learning and AI integration within data platforms.
    • Familiarity with other cloud platforms like AWS or GCP.
    • Certifications such as Azure Solutions Architect Expert, Azure Data Engineer Associate, or Databricks Certification (Certified Data Engineer or similar).

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity & paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-150k yearly 4d ago
  • Java Software Engineer

    Lancesoft, Inc. (4.5 company rating)

    Richmond, VA

    Pay: $65-75/hr. Looking for Richmond & McLean, VA (hybrid).

    The application developer is responsible for designing, developing, testing, deploying, and maintaining services following enterprise standards and guidelines, with a thorough understanding of the cloud environment and DevSecOps.

    Basic Qualifications:
    - Bachelor's degree in Computer Science or a related field
    - At least 5 years of experience in software development, including design, coding, and testing
    - At least 2 years of experience in Java, Spring, and Spring Boot frameworks
    - At least 3 years of experience in design and development of RESTful APIs
    - At least 3 years of experience using DevOps tools for pipelines
    - At least 2 years of experience in a cloud environment using various compute, storage, and network services

    Preferred Qualifications:
    - Experience in Identity and Access Management related services using SAML, OIDC/OAuth
    - Experience with the AWS Cloud environment for various services
    - Experience in application/data security with an understanding of the OWASP Top 10
    $65-75 hourly 1d ago
  • Senior Dotnet Developer

    Prutech Solutions, Inc. (4.6 company rating)

    New York, NY

    Application Developer Qualifications and Requirements:
    • 14+ years of professional software development experience.
    • Expert proficiency in C# and the .NET / .NET Core framework.
    • 3+ years of experience working specifically with HL7 messaging standards (v2), including detailed knowledge of segments like PID, PV1, OBR, and ORC, and message types like ORM (Orders) and ORU (Results). (A toy HL7 parsing sketch follows this listing.)
    • Demonstrable experience developing and deploying services using ASP.NET Core (Web API, Microservices).
    • Strong understanding of modern architectural patterns (e.g., Microservices, Event-Driven Architecture).
    • Proficiency in SQL and experience with SQL Server, including stored procedures and complex query optimization.
    • Experience with SSIS packages.
    • Experience with reporting tools such as SSRS, Power BI, or similar platforms.
    • Familiarity with cloud platforms (preferably Azure, including App Services, Functions, and Service Bus/Event Hub).
    • Bachelor's degree in computer science or a related field.

    EEOE
    $109k-144k yearly est. 1d ago
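    The HL7 v2 knowledge the role requires rests on a simple delimited wire format: segments split on carriage returns, fields on "|", components on "^". Here is a toy parser for the PID segment, written in Python to stay consistent with the other sketches on this page (the role itself is C#/.NET); the message content is invented.

    ```python
    # Toy HL7 v2 parse: extract the MRN (PID-3), patient name (PID-5), and
    # sex (PID-8) from an invented ORU message.
    message = ("MSH|^~\\&|LAB|HOSP|||202401011200||ORU^R01|MSG001|P|2.3\r"
               "PID|1||12345^^^HOSP^MR||Doe^John||19700101|M")

    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            mrn = fields[3].split("^")[0]              # PID-3: identifier
            family, given = fields[5].split("^")[:2]   # PID-5: patient name
            print(f"MRN {mrn}: {given} {family}, sex {fields[8]}")
    ```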
  • Senior Developer

    Zillion Technologies, Inc. (3.9 company rating)

    McLean, VA

    The candidate must have experience with both Java and Python, in addition to PowerShell and AI/ML tools.

    Must-Have Qualifications: Python and Java development, a strong understanding of design, and the ability to explain concepts and educate developers.

    Key Responsibilities:
    • Design and implement developer workspaces using physical, virtualized, or browser-based solutions.
    • Develop tools, primarily in Python and Java, to enhance developer workflows.
    • Advocate for and implement CI/CD improvements through new tooling and commonly available libraries.
    • Create patterns to manage desktop provisioning and software package management using SCCM, VDI, or similar technologies.
    • Lead initiatives to integrate Generative AI capabilities into developer workflows, enhancing the value proposition for customers.
    • Partner with end-user collaboration suites to create seamless developer experiences.
    • Ensure all solutions meet audit, risk, and governance requirements.
    • Evangelize best practices and solutions within the developer community.
    $90k-118k yearly est. 5d ago
  • Senior Software Engineer -- KUMDC5680656

    Compunnel Inc. (4.4 company rating)

    McLean, VA

    Required Technical Skills

    Required:
    • Strong design and development skills in two or more of the following technologies and tools: Java (3-5 years); Cucumber (3-5 years), JBehave, or other BDD testing frameworks
    • At least 8 years of test automation framework design
    • Strong experience in testing web services (REST APIs) (3-5 years); a minimal API-test sketch follows this listing
    • Proven experience developing test scripts, test cases, and test data
    • The ability to write queries in SQL or other relational databases
    • 3+ years of experience in developing scenario-based performance testing using JMeter
    • Experience testing full stack and integration testing with third parties
    • End-to-end system integration testing experience for software platforms

    Desired:
    • Hands-on Python development experience in AWS Cloud technology
    • Experience in TDD, continuous integration, and code review practices is strongly desired
    • Experience with Apigee or other API gateways is a plus
    • Experience with DevOps concepts and tools (e.g., CI/CD, Jenkins, Git)
    • At least 2 years working on an Agile team with a solid understanding of Agile/Lean practices
    • Understanding of microservice architecture
    • Load and performance testing experience
    • Strong documentation skills
    $88k-114k yearly est. 1d ago
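    To give the REST API testing requirement above some shape, here is a hedged Pytest-plus-requests sketch that asserts on status, a minimal response schema, and a crude latency budget. The base URL, endpoint, and payload shape are placeholders for a real service under test.

    ```python
    # Minimal REST API test: status code, schema subset, and latency check.
    # Run with `pytest` against a reachable service; BASE_URL is invented.
    import requests

    BASE_URL = "https://api.example.internal"  # placeholder

    def test_get_account_returns_expected_shape():
        resp = requests.get(f"{BASE_URL}/v1/accounts/42", timeout=5)
        assert resp.status_code == 200
        body = resp.json()
        assert {"id", "status"} <= body.keys()      # minimal schema check
        assert resp.elapsed.total_seconds() < 1.0   # crude latency budget
    ```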
