Data Engineer jobs at Mindex

- 1157 jobs
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications:
    - Strong proficiency in data engineering and data modeling
    - Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    - Strong Python background
    - Expertise in Extract, Transform, Load (ETL) processes and tools
    - Experience designing, managing, and optimizing data warehousing solutions
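    The ETL workflow this posting describes can be illustrated with a minimal, stdlib-only sketch: extract rows from CSV text, apply a transform with a basic data-quality rule, and load into SQLite. All names and the schema here are hypothetical, not from the posting.

```python
import csv
import io
import sqlite3

# Hypothetical input: raw CSV of instrument prices, as an extract step might receive it.
RAW = """symbol,price,currency
aapl,189.50,USD
msft,,USD
goog,141.20,usd
"""

def extract(text):
    """Extract: parse CSV text into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize casing, drop rows with missing prices."""
    out = []
    for r in rows:
        if not r["price"]:
            continue  # data-quality rule: price is mandatory
        out.append((r["symbol"].upper(), float(r["price"]), r["currency"].upper()))
    return out

def load(rows, conn):
    """Load: write into a warehouse-style table and report the row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS prices (symbol TEXT, price REAL, currency TEXT)")
    conn.executemany("INSERT INTO prices VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
print(loaded)  # 2 (the row with a missing price is rejected)
```

    Production pipelines add incremental loading, lineage, and monitoring on top of this same extract/transform/load shape.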
    $91k-123k yearly est. 1d ago
  • Lead Data Engineer

    APN Consulting, Inc. (4.5 company rating)

    New York, NY

    Job title: Lead Software Engineer
    Duration: Full-time / Contract to Hire

    Role description: The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on the HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives.

    Key Responsibilities:
    - Manage and support HR business applications, including problem resolution and issue ownership
    - Design and develop the ETL/ELT layer for HR data integration and ensure data quality and consistency
    - Provide architecture solutions for data modeling, data warehousing, and data governance
    - Develop and maintain data ingestion processes using Informatica, Python, and related technologies
    - Support data analytics and data science initiatives with optimized data structures and AI/ML tools
    - Manage vendor products and their integrations with internal/external applications
    - Gather requirements and translate functional needs into technical specifications
    - Perform QA testing and impact analysis across the BI ecosystem
    - Maintain system documentation and knowledge repositories
    - Provide technical guidance and manage stakeholder communications

    Required Skills & Experience:
    - Bachelor's degree in computer science or engineering with 4+ years of delivery and maintenance experience in the data and analytics space
    - Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL tools, advanced SQL, and Python programming
    - Exposure to AI & ML technologies and experience tuning models and building LLM integrations
    - Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics
    - Extensive database development experience in MS SQL Server/Oracle and SQL scripting
    - Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins
    - Proficiency with collaboration tools such as Confluence, SharePoint, and JIRA
    - Analytical skills to model business functions, processes, and data flow within or between systems
    - Strong problem-solving skills to debug complex, time-critical production incidents
    - Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams
    - Experience with cloud data lake technologies such as Snowflake and knowledge of the HR data model would be a plus
    $93k-133k yearly est. 3d ago
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group (4.5 company rating)

    New York, NY

    Title: Data Engineer (Web Scraping technologies)
    Duration: FTE/Perm
    Salary: 125-190k plus bonus

    Responsibilities:
    - Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
    - Manage web scraping requests, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scraped data, and validation and entitlement to users
    - Field questions from users about the scrapes and websites
    - Coordinate with Compliance on approvals and Terms of Use (TOU) reviews
    - Build data pipelines on the AWS platform utilizing existing tools such as cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
    - Normalize/standardize vendor data and firm data for firm-wide consumption
    - Implement data quality checks to ensure reliability and accuracy of scraped data
    - Coordinate with internal teams on delivery, access, requests, and support
    - Promote data engineering best practices

    Required Skills and Qualifications:
    - Bachelor's degree in computer science, engineering, mathematics, or a related field
    - 2-5 years of experience in a similar role
    - Prior buy-side experience is strongly preferred (multi-strategy/hedge funds)
    - Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems
    - AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
    - Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.)
    - Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools
    - Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.)
    - Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
    - Strong communication skills to work with stakeholders across technology, investment, and operations teams
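    At its core, the scraping capability this posting describes is extraction plus validation. A minimal, dependency-free sketch of that pair (stdlib only; in practice a framework like Scrapy or Playwright would handle fetching and rendering, and the page fragment below is hypothetical):

```python
from html.parser import HTMLParser

class PriceTableParser(HTMLParser):
    """Collect the text of every <td> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

def validate(rows):
    """Data-quality check: every row needs a ticker and a parseable price."""
    good, bad = [], []
    for row in rows:
        try:
            good.append((row[0], float(row[1])))
        except (IndexError, ValueError):
            bad.append(row)
    return good, bad

# Hypothetical page fragment, standing in for a fetched response body.
HTML = "<table><tr><td>AAPL</td><td>189.5</td></tr><tr><td>MSFT</td><td>n/a</td></tr></table>"
p = PriceTableParser()
p.feed(HTML)
good, bad = validate(p.rows)
print(good)  # [('AAPL', 189.5)]; the malformed MSFT row lands in `bad`
```

    Routing rejects to a `bad` queue rather than dropping them silently is what makes the quality checks the posting asks for auditable.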
    $86k-120k yearly est. 1d ago
  • Data Engineer

    Gotham Technology Group (4.5 company rating)

    New York, NY

    Our client is seeking a Data Engineer with hands-on experience in web scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves.

    Responsibilities:
    - Develop scalable web scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries.
    - Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring.
    - Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines.
    - Build and support AWS-based data pipelines using tools such as cron, Glue, EventBridge, Lambda, Python ETL, and Redshift.
    - Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm.
    - Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets.
    - Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users.
    - Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns.

    Required Qualifications:
    - Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
    - 2-5 years of experience in a similar Data Engineering or Web Scraping role.
    - Capital markets knowledge with familiarity across asset classes and experience supporting trading systems.
    - Strong hands-on experience with AWS services (S3, Lambda, EventBridge, cron, Glue, Redshift).
    - Proficiency with modern web scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright).
    - Strong Python programming skills and experience with SQL and NoSQL databases.
    - Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus.
    - Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
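    The normalize-and-standardize responsibility above usually means mapping several vendors' conventions onto one firm-wide schema. A hedged sketch with two hypothetical vendor records (field names and formats are invented for illustration):

```python
from datetime import datetime

# Hypothetical vendor records: same instrument, inconsistent conventions.
VENDOR_A = {"ticker": "aapl us", "px": "189.50", "asof": "2024-01-31"}
VENDOR_B = {"symbol": "AAPL", "price": 189.5, "date": "01/31/2024"}

def normalize(record):
    """Map heterogeneous vendor fields onto one canonical schema."""
    symbol = (record.get("ticker") or record.get("symbol")).split()[0].upper()
    price = float(record.get("px") or record.get("price"))
    raw_date = record.get("asof") or record.get("date")
    asof = None
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):  # accept either date convention
        try:
            asof = datetime.strptime(raw_date, fmt).date().isoformat()
            break
        except ValueError:
            continue
    return {"symbol": symbol, "price": price, "asof": asof}

# Both records collapse to the same canonical row:
print(normalize(VENDOR_A) == normalize(VENDOR_B))  # True
```

    Real implementations typically drive the field mapping from per-vendor config rather than hard-coded fallbacks, but the invariant is the same: downstream consumers see one schema.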
    $86k-120k yearly est. 1d ago
  • Cloud Data Engineer

    Gotham Technology Group (4.5 company rating)

    New York, NY

    Title: Enterprise Data Management - Data Cloud, Senior Developer I
    Duration: FTE/Permanent
    Salary: 130-165k

    The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.

    Responsibilities:
    - Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures.
    - Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications.
    - Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools.
    - Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency.
    - Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation.
    - Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications.
    - Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access.
    - Champion data engineering standards, best practices, and governance across the enterprise.
    - Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning.
    - Partner with the Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.

    Qualifications:
    - Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent.
    - 10+ years in data engineering, cloud platform development, or analytics engineering.
    - Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies.
    - Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers).
    - Preferred: experience containerizing data workloads with Docker and Kubernetes.
    - Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios.
    - Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products.
    - Bonus: background in data mesh designs and automated data access systems.
    - Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow.
    - Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines.
    - Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling.
    - Bonus: exposure to BI tools (e.g., Tableau, Power BI), though not central to the role.
    - Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
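    The incremental ELT pattern behind tools like dbt can be approximated in plain SQL. This sketch uses SQLite's upsert as a stand-in for a warehouse MERGE and implements a type-1 slowly changing dimension; table and column names are illustrative, not from the posting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, tier TEXT)")

def incremental_load(rows):
    """Upsert a batch: new ids insert, existing ids update in place
    (type-1 SCD: history is overwritten, not versioned)."""
    conn.executemany(
        """INSERT INTO dim_customer (id, name, tier) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET name = excluded.name, tier = excluded.tier""",
        rows,
    )

incremental_load([(1, "Acme", "gold"), (2, "Globex", "silver")])
incremental_load([(2, "Globex", "gold"), (3, "Initech", "bronze")])  # id 2 changes tier
print(conn.execute("SELECT id, tier FROM dim_customer ORDER BY id").fetchall())
# [(1, 'gold'), (2, 'gold'), (3, 'bronze')]
```

    In dbt the same idea is expressed declaratively (an incremental model with a `unique_key`); the engine generates the merge so each run processes only new or changed rows.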
    $86k-120k yearly est. 1d ago
  • Big Data Developer

    Capgemini (4.5 company rating)

    New York, NY

    We're looking for a seasoned Senior Data Engineer with strong Hadoop experience to design, build, and scale data pipelines and platforms powering analytics, AI/ML, and business operations. You'll own end-to-end data engineering, from ingestion and transformation to performance optimization, across large-scale distributed systems and modern cloud data platforms.

    Key Responsibilities:
    - Design & Build Data Pipelines: Architect, develop, and maintain robust ETL/ELT pipelines for batch and streaming data using the Hadoop ecosystem, Spark, and Airflow.
    - Big Data Architecture: Define and implement scalable big data architectures, ensuring reliability, fault tolerance, and cost efficiency.
    - Data Modeling: Develop and optimize data models for the Data Warehouse and Operational Data Store (ODS); ensure conformed dimensions and star/snowflake schemas where appropriate.
    - SQL Expertise: Write, optimize, and review complex SQL/HiveQL queries for large datasets; enforce query standards and patterns.
    - Performance Tuning: Optimize Spark jobs, SQL queries, storage formats (e.g., Parquet/ORC), partitioning, and indexing to improve latency and throughput.
    - Data Quality & Governance: Implement data validation, lineage, cataloging, and security controls across environments.
    - Workflow Orchestration: Build and manage DAGs in Airflow, ensuring observability, retries, alerting, and SLAs.
    - Cross-functional Collaboration: Partner with Data Science, Analytics, and Product teams to deliver reliable datasets and features.
    - Best Practices: Champion coding standards, CI/CD, infrastructure-as-code (IaC), and documentation across the data platform.

    Required Qualifications:
    - 7+ years of hands-on data engineering experience building production-grade pipelines.
    - Strong experience with Hadoop (HDFS, YARN), Hive SQL/HiveQL, Spark (Scala/Java/PySpark), and Airflow.
    - Expert-level SQL skills with the ability to write and tune complex queries on large datasets.
    - Solid understanding of big data architecture patterns (e.g., lakehouse, data lake + warehouse, CDC).
    - Deep knowledge of ETL/ELT and DW/ODS concepts (slowly changing dimensions, partitioning, columnar storage, incremental loads).
    - Proven track record in performance tuning for large-scale systems (Spark jobs, shuffle optimizations, broadcast joins, skew handling).
    - Strong programming background in Java and/or Scala (Python is a plus).

    Preferred Skills:
    - Experience with AI-driven data processing (feature engineering pipelines, ML-ready datasets, model data dependencies).
    - Hands-on experience with cloud data platforms (AWS, GCP, or Azure) and services like EMR/Dataproc/HDInsight, S3/GCS/ADLS, Glue/Dataflow, BigQuery/Snowflake/Redshift/Synapse.
    - Exposure to NoSQL databases (Cassandra, HBase, DynamoDB, MongoDB).
    - Advanced data governance & security (row/column-level security, tokenization, encryption at rest/in transit, IAM/RBAC, data lineage/catalog).
    - Familiarity with Kafka (topics, partitions, consumer groups, schema registry, stream processing).
    - Experience with CI/CD for data (Git, Jenkins/GitHub Actions, Terraform) and containerization (Docker, Kubernetes).
    - Knowledge of metadata management and data observability (Great Expectations, Monte Carlo, OpenLineage).

    Life at Capgemini: Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
    - Flexible work
    - Healthcare including dental, vision, mental health, and well-being programs
    - Financial well-being programs such as 401(k) and Employee Share Ownership Plan
    - Paid time off and paid holidays
    - Paid parental leave
    - Family building benefits like adoption assistance, surrogacy, and cryopreservation
    - Social well-being benefits like subsidized back-up child/elder care and tutoring
    - Mentoring, coaching and learning programs
    - Employee Resource Groups
    - Disaster Relief

    Disclaimer: Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law. This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Click the following link for more information on your rights as an Applicant: **************************************************************************
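    Of the Spark tuning techniques named in the posting above, skew handling is the least obvious: when one join key dominates, a single partition does nearly all the work. The standard fix is key salting. This is a Spark-free, pure-Python sketch purely to show the idea (a real job would salt the DataFrame key column and replicate the small join side per salt):

```python
import random
from collections import Counter

NUM_SALTS = 4

def salted_key(key, rng):
    """Spread each occurrence of a key across NUM_SALTS artificial sub-keys."""
    return (key, rng.randrange(NUM_SALTS))

# A skewed dataset: one key dominates, so one partition would do all the work.
rng = random.Random(0)  # fixed seed so the sketch is deterministic
keys = ["hot"] * 1000 + ["cold"] * 10

plain = Counter(keys)                                # partition sizes without salting
salted = Counter(salted_key(k, rng) for k in keys)   # the 1000 "hot" rows now split across up to NUM_SALTS buckets

print(plain["hot"], max(salted.values()))
```

    The trade-off: the small side of the join must be duplicated once per salt value so every salted sub-key still finds its match, which is why salting pairs naturally with broadcast joins.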
    $87k-130k yearly est. 2d ago
  • Sr. Azure Data Engineer

    Synechron (4.4 company rating)

    New York, NY

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.

    Our challenge: We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the financial/banking domain.

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below).

    The Role - Responsibilities:
    - Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
    - Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
    - Proactively support team members and peers in delivering their tasks to ensure end-to-end delivery.
    - Evaluate technical performance challenges and recommend tuning solutions.
    - Serve as a hands-on data service engineer to design, develop, and maintain our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python.

    Requirements:
    - Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    - Strong expertise in distributed data processing and streaming architectures.
    - Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management.
    - Proficiency in Python scripting and programming for data manipulation and automation.
    - Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
    - Knowledge of SQL, data modeling, and ETL/ELT processes.
    - Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
    - Domain knowledge in any of the following areas: trade processing, settlement, reconciliation, and related back/middle-office functions within financial markets (equities, fixed income, derivatives, FX, etc.); trade lifecycle events, order types, allocation rules, and settlement processes; funding support, planning & analysis, regulatory reporting & compliance.
    - Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.

    We offer:
    - A highly competitive compensation and benefits package.
    - A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    - 10 days of paid annual leave (plus sick leave and national holidays).
    - Maternity & paternity leave plans.
    - A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    - Retirement savings plans.
    - A higher education certification policy.
    - Commuter benefits (varies by region).
    - Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    - On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
    - Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
    - Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
    - A flat and approachable organization.
    - A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
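    A common shape for the Kafka-to-Snowflake pipelines described in the posting above is a micro-batching consumer: buffer incoming messages, then flush them to the warehouse as bulk loads. This stdlib-only sketch models the topic and the warehouse as in-memory lists purely to show the pattern; real code would use a Kafka client library and the Snowflake connector, and all names here are invented:

```python
import json

BATCH_SIZE = 3

class MicroBatcher:
    """Buffer incoming messages and flush them to a sink in batches."""
    def __init__(self, sink, batch_size=BATCH_SIZE):
        self.sink, self.batch_size, self.buffer = sink, batch_size, []

    def consume(self, message):
        self.buffer.append(json.loads(message))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink.append(list(self.buffer))  # one bulk load per batch
            self.buffer.clear()

warehouse = []  # stands in for a Snowflake table
b = MicroBatcher(warehouse)
for i in range(7):  # stands in for polling a Kafka topic
    b.consume(json.dumps({"trade_id": i}))
b.flush()  # drain the tail on shutdown
print([len(batch) for batch in warehouse])  # [3, 3, 1]
```

    Batching matters for Snowflake specifically because per-row inserts are expensive; bulk loads (COPY or Snowpipe) amortize that cost, which is one of the loading/performance-tuning concerns the requirements list alludes to.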
    $130k-140k yearly 3d ago
  • Lead Data Engineer with Banking

    Synechron (4.4 company rating)

    New York, NY

    Our challenge: We are seeking an experienced Lead Data Engineer to spearhead our data infrastructure initiatives. The ideal candidate will have a strong background in building scalable data pipelines, with hands-on expertise in Kafka, Snowflake, and Python. As a key technical leader, you will design and maintain robust streaming and batch data architectures, optimize data loads in Snowflake, and drive automation and best practices across our data platform.

    Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $140k/year & benefits.

    The Role - Responsibilities:
    - Design, develop, and maintain reliable, scalable data pipelines leveraging Kafka, Snowflake, and Python.
    - Lead the implementation of distributed data processing and real-time streaming solutions.
    - Manage Snowflake data warehouse environments, including data loading, tuning, and optimization for performance and cost-efficiency.
    - Develop and automate data workflows and transformations using Python scripting.
    - Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions.
    - Monitor, troubleshoot, and optimize data pipelines and platform performance.
    - Ensure data quality, governance, and security standards are upheld.
    - Guide and mentor junior team members and foster best practices in data engineering.

    Requirements:
    - Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    - Strong expertise in distributed data processing frameworks and streaming architectures.
    - Hands-on experience with the Snowflake data warehouse platform, including data ingestion, performance tuning, and management.
    - Proficiency in Python for data manipulation, automation, and scripting.
    - Familiarity with Kafka ecosystem tools such as Confluent, Kafka Connect, and Kafka Streams.
    - Solid understanding of SQL, data modeling, and ETL/ELT processes.
    - Knowledge of cloud platforms (AWS, Azure, GCP) is advantageous.
    - Strong troubleshooting skills and ability to optimize data workflows.
    - Excellent communication and collaboration skills.

    Preferred, but not required:
    - Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
    - Experience with containerization (Docker, Kubernetes) is a plus.
    - Knowledge of data security best practices and GDPR compliance.
    - Certifications related to cloud platforms or data engineering.
    $135k-140k yearly 2d ago
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. (4.0 company rating)

    New York, NY

    NYC, NY / Hybrid; 12+ months

    Project: Leveraging Llama to extract cybersecurity insights out of unstructured data from their ticketing system.

    Must have strong experience with:
    - Llama
    - Python
    - Hadoop
    - MCP
    - Machine Learning (ML)

    They need a strong developer using Llama and Hadoop (this is where the data sits), with experience with MCP. They have various ways to pull the data out of their tickets but want someone who can come in and make recommendations on the best way to do it and then get it done. They have tight timelines.

    Thanks and Regards!
    Lavkesh Dwivedi ************************
    Amtex System Inc.
    28 Liberty Street, 6th Floor | New York, NY - 10005
    ************ ********************
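    The posting doesn't say how insights are extracted from the tickets. As a purely hypothetical baseline (a keyword tally, not an LLM), mining unstructured ticket text often starts with something this simple before a model like Llama is brought in to classify and summarize:

```python
import re
from collections import Counter

# Hypothetical vocabulary of security signals; a real taxonomy would be far larger.
SECURITY_TERMS = {"phishing", "malware", "ransomware", "breach", "ddos"}

def ticket_signals(tickets):
    """Count security-related terms across free-text ticket bodies."""
    counts = Counter()
    for text in tickets:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in SECURITY_TERMS:
                counts[word] += 1
    return counts

tickets = [
    "User reports phishing email with malware attachment",
    "Possible data breach after phishing campaign",
]
print(ticket_signals(tickets).most_common(2))
# [('phishing', 2), ('malware', 1)]
```

    A baseline like this gives the project a yardstick: the LLM-based extraction has to beat the keyword tally to justify its cost.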
    $78k-104k yearly est. 3d ago
  • Azure Data Architect

    Synechron (4.4 company rating)

    New York, NY

    We are At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets. Our challenge: We are seeking an experienced Azure Data Architect to design and implement scalable, secure, and efficient data solutions on Azure Cloud for our financial services client. The architect will lead the development of data platforms using Azure services and Databricks, ensuring robust data architecture that supports business objectives and regulatory compliance. Additional Information The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role if filled within New York City, NY is $135K to $150K/year & benefits (see below). Key Responsibilities: Design and develop end-to-end data architecture on Azure Cloud, including data ingestion, storage, processing, and analytics solutions. Lead the deployment of Databricks environments and integrate them seamlessly with other Azure services. Collaborate with stakeholders to gather requirements and translate them into architectural designs. 
Ensure data security, privacy, and compliance standards are met within the architecture. Optimize data workflows and pipelines for performance and cost-efficiency. Provide technical guidance and mentorship to development teams. Keep abreast of the latest Azure and Databricks technologies and incorporate best practices. Qualifications: Extensive experience designing and implementing data architectures on Azure Cloud. Deep understanding of Databricks platform and its integration with Azure services. Strong knowledge of data warehousing, data lakes, and real-time streaming solutions. Proficiency in SQL, Python, Scala, or Spark. Experience with Azure Data Factory, Azure Data Lake, Azure SQL, and Azure Synapse Analytics. Solid understanding of security, governance, and compliance in cloud data solutions. Preferred, but not required: Experience working in the financial services domain. Knowledge of machine learning and AI integration within data platforms. Familiarity with other cloud platforms like AWS or GCP. Certifications such as Azure Solutions Architect Expert, Azure Data Engineer, or Databricks Certification. Certifications: Azure Solutions Architect Expert Azure Data Engineer Associate Databricks Certification (Certified Data Engineer or similar) We offer: A highly competitive compensation and benefits package A multinational organization with 58 offices in 21 countries and the possibility to work abroad 10 days of paid annual leave (plus sick leave and national holidays) Maternity & Paternity leave plans A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region) Retirement savings plans A higher education certification policy Commuter benefits (varies by region) Extensive training opportunities, focused on skills, substantive knowledge, and personal development. 
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms A flat and approachable organization A truly diverse, fun-loving and global work culture SYNECHRON'S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
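For illustration, the ingest-transform-load pattern this architect role centers on can be sketched in plain Python. This is a minimal sketch, not the client's actual pipeline: `sqlite3` stands in for an Azure SQL sink, the hard-coded records stand in for a raw ingestion source, and all table, field, and function names are invented.

```python
import sqlite3

# Minimal ETL sketch. sqlite3 stands in for an Azure SQL sink; the
# hard-coded RAW_ROWS stand in for a raw ingestion feed. All names
# (trades, transform, load, ...) are illustrative assumptions.

RAW_ROWS = [
    {"symbol": "aapl", "qty": "100", "price": "189.50"},
    {"symbol": "msft", "qty": "50", "price": "410.00"},
    {"symbol": "aapl", "qty": "", "price": "190.10"},  # dirty row: missing qty
]

def transform(rows):
    """Cleanse and normalize: drop incomplete rows, upper-case symbols, cast types."""
    out = []
    for r in rows:
        if not r["qty"]:
            continue  # data-quality rule: skip rows with a missing quantity
        out.append((r["symbol"].upper(), int(r["qty"]), float(r["price"])))
    return out

def load(conn, rows):
    """Load the cleansed rows into the sink table."""
    conn.execute("CREATE TABLE IF NOT EXISTS trades (symbol TEXT, qty INT, price REAL)")
    conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(RAW_ROWS))
total_qty = conn.execute("SELECT SUM(qty) FROM trades").fetchone()[0]
```

In an Azure Data Factory/Databricks setting the same three stages would map onto pipeline activities and Spark jobs, but the cleanse-then-load shape is the same.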
    $135k-150k yearly 1d ago
  • AI ML Engineer

    Synechron 4.4company rating

    New York, NY jobs

    Our challenge We are seeking a highly experienced and innovative Senior AI/ML Engineer with industry expertise to lead the development of scalable machine learning systems. In this pivotal role, you will architect and implement advanced AI solutions, guide strategic AI initiatives, and collaborate with cross-functional teams to drive innovation. Your technical leadership will be instrumental in shaping our AI roadmap and delivering state-of-the-art models and systems. Additional Information The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role, if filled within New York, NY, is $135k - $145k/year & benefits (see below). The Role Responsibilities: Design, develop, and optimize scalable machine learning systems capable of handling large-scale data and complex models. Develop advanced statistical models to solve complex business problems. 
Lead the deployment of ML models into production environments with robustness and efficiency. Integrate models seamlessly with REST APIs for application integration. Provide technical guidance and strategic direction for AI initiatives across teams. Stay abreast of the latest AI/ML research, especially in NLP, deep learning, and large language models. Mentor junior team members and promote best practices in AI engineering. Collaborate with data engineers, software developers, and product teams to align AI solutions with business goals. Requirements: Proven experience designing scalable ML systems and architectures. Strong expertise in advanced statistical modeling techniques. Deep experience in model development and deployment pipelines. Proficiency in integrating ML models with REST APIs. Hands-on experience with cloud ML platforms, such as AWS SageMaker or Azure AI. Preferred, but not required: In-depth experience with Natural Language Processing (NLP), deep learning, and transformer-based models. Demonstrated leadership in formulating and executing AI strategies within organizations. Knowledge of the latest AI frameworks, including Transformers and Large Language Models (LLMs). We offer: A highly competitive compensation and benefits package. A multinational organization with 58 offices in 21 countries and the possibility to work abroad. 10 days of paid annual leave (plus sick leave and national holidays). Maternity & paternity leave plans. A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region). Retirement savings plans. A higher education certification policy. Commuter benefits (varies by region). Extensive training opportunities, focused on skills, substantive knowledge, and personal development. On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses. 
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups. Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms. A flat and approachable organization. A truly diverse, fun-loving, and global work culture.
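The deployment side of this role is about wrapping a trained model behind a REST endpoint. As a hedged sketch of the scoring function such an endpoint would call, here is a logistic-regression score in pure Python; the weights and feature names are invented, and in practice the model would come from a framework like scikit-learn served via SageMaker or Azure AI as the posting notes.

```python
import math

# Illustrative scoring function of the kind a model-serving REST endpoint
# would wrap. The weights and feature names below are invented for the
# sketch, not a real model.

WEIGHTS = {"bias": -1.0, "tenure_years": 0.8, "num_products": 0.5}

def predict_proba(features: dict) -> float:
    """Logistic regression score: sigmoid of the bias plus the weighted feature sum."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = predict_proba({"tenure_years": 2.0, "num_products": 1.0})
```

A production endpoint would add input validation, batching, and model versioning around this call, but the request-to-score path is this small.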
    $135k-145k yearly 2d ago
  • Backend Software Engineer

    V Group Inc. 4.2company rating

    New York, NY jobs

    Direct Client: Metropolitan Transportation Authority Job Title: Backend Software Engineer Duration: 06 Months Position Type: Contract (Part Time) Number of Hours: 25 Hrs/Week Interview Type: Webcam or In-Person Ceipal ID: MTA_JVM176_AK Requirement ID: 5176-1 ***This will be a hybrid role; 3 days on-site and 2 days remote.*** Description: The Digital Services team is seeking a part-time backend software engineer to help build out the future of data and technology for the MTA. This person will play a crucial role in shaping the daily commute of 3M+ New Yorkers. Our team is responsible for all real-time signage in the subway, the TrainTime app, the MTA app, and the processing systems that transform raw data into actionable information for passengers. Responsibilities: Independence and bias towards action, able to find scrappy solutions while keeping an eye toward the future Product-focused engineering that's committed to getting the experience right for our riders Thoughtful collaboration: willing to work with engineers across the stack and cross-functionally with product and design Enthusiasm and curiosity about our transit system! Technical skills: Understanding of existing software development best practices Basic knowledge of platforms and systems commonly used in full-stack applications. For us, this includes Firebase, Netlify, Sentry and AWS. Specific experience with any of these is a plus. Basic familiarity with JVM languages, RESTful APIs, message queues, networking Experience with GIS or location-based data and systems (including ESRI) is a plus Experience and education Bachelor's degree in computer science or related field is required. Demonstrated equivalent experience and education may be considered in lieu of the degree, subject to approval. Prior experience working on customer-facing applications. Must possess prior experience running projects, writing technical documents including scopes of work, software requirements, and estimates. 
Skills: Graphic design for the web; software design principles; user interface design. V Group Inc. is a NJ-based IT Services and Products Company with its business strategically categorized in various Business Units including Public Sector, Enterprise Solutions, Professional Services, Ecommerce, Projects, and Products. Within the Public Sector business unit, we provide IT Professional Services to Federal, State, and Local government clients. We have multiple awards/contracts with 30+ states, including but not limited to NY, CA, FL, GA, MD, MI, NC, OH, OR, CO, CT, TN, PA, TX, VA, NM, VT, and WA. If you are considering applying for a position with V Group, or in partnering with us on a position, please feel free to contact me for any questions you may have regarding our services and the advantages we can offer you as a consultant. Please share my contact information with others working in Information Technology. Website: ************************************** LinkedIn: ***************************************** Facebook: ********************************* Twitter: *********************************
    $82k-110k yearly est. 3d ago
  • Senior Synon Developer

    Resource Informatics Group, Inc. 3.9company rating

    New York, NY jobs

    Hi, The following requirement is open with our client. Job Title: Senior Synon Developer Interview: Face to face Duration: 6+ months with a possibility of right to hire (onsite initially, moving to hybrid after the consulting phase) Job Description: Job Purpose/Role We are seeking a Senior Synon Developer to develop, enhance, and support our Member Benefits System (MBS). The ideal candidate will possess a comprehensive understanding of Synon functionality and the data exchange integrations we employ with other systems. Collaboration with a team of business analysts and internal customers will be essential in defining changes to the MBS system for new projects, existing code maintenance, and addressing production support issues. The role's primary responsibilities encompass system design, development, testing, problem analysis, and collaboration with internal customers and external vendors. Adhering to established methodologies, standards, and guidelines, including creating test plans and documentation, is crucial. We are particularly interested in individuals with expertise in the full lifecycle of application development. Experience/Skills: 8+ years of experience using the CA:2E (Synon/2E, Advantage/2E, Cool:2e) toolset. 8+ years of application development experience with IBM-i (System i, i-Series, AS/400) using RPG, SQL, and CL tools. Experience in working with and supporting multiple concurrent projects/programs. Strong troubleshooting skills and diligence in the pursuit of issue resolutions. Ability to code with minimal guidance and meet project deadlines. Experience with Synon/CM, Design Tracker, ServiceNow, Snowflake, MuleSoft, Showcase, Salesforce, PageDNA, LegaSuite, Java, Apex, Query Tools, or web design is a plus. Thanks & regards, K Bala Krishna Resource Manager Resource Informatics Group, Inc Email: ***************** LinkedIn: linkedin.com/in/bala-krishna-kunchapu-a7331221a Website: ****************
    $114k-155k yearly est. 1d ago
  • Senior Software Engineer

    Acquire Me 3.6company rating

    New York, NY jobs

    Senior Python Engineer I'm hiring a Senior Python Engineer for a high-growth alternative investment manager running large-scale systematic strategies for major family offices and institutional allocators. This is a senior, high-ownership engineering seat. The role sits at the centre of their core systems: backend services, distributed data pipelines, and AWS infrastructure that underpins the investment engine. What you'll lead: • Scaling backend services used across research, trading and client platforms • Architecture + implementation of AWS systems (Lambda, SQS, DynamoDB, Redshift, S3) • Building and maintaining distributed data pipelines and platform APIs • Driving reliability, performance and automation across core services • Partnering with senior engineers and PMs on technical direction What you'll bring: • 6-10+ years building production systems in Python • Deep AWS experience across infra + services • Strong with data tooling (Pandas / PyArrow / Polars) • Comfortable with event-driven + distributed architecture • Linux + Terraform/CloudFormation • Clear communicator who can operate autonomously If you want a profitable, well-run buy-side environment with modern tooling and genuine ownership - DM me.
    $101k-128k yearly est. 1d ago
  • Senior Dotnet Developer

    Prutech Solutions, Inc. 4.6company rating

    New York, NY jobs

    Application Developer Qualifications and Requirements: 14+ years of professional software development experience. Expert proficiency in C# and the .NET / .NET Core framework. 3+ years of experience working specifically with HL7 messaging standards (v2), including detailed knowledge of segments like PID, PV1, OBR, ORC, and message types like ORM (Orders) and ORU (Results). Demonstrable experience developing and deploying services using ASP.NET Core (Web API, Microservices). Strong understanding of modern architectural patterns (e.g., Microservices, Event-Driven Architecture). Proficiency in SQL and experience with SQL Server, including stored procedures and complex query optimization. Experience with SSIS packages. Experience with reporting tools such as SSRS, Power BI, or similar platforms. Familiarity with cloud platforms (preferably Azure, including App Services, Functions, and Service Bus/Event Hub). Bachelor's degree in computer science or a related field. EEOE
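The HL7 v2 experience this posting asks for comes down to working with pipe-delimited segments like PID and OBR inside messages such as ORU results. As a hedged sketch of that structure, the snippet below parses an invented, heavily abbreviated sample message; Python is used here for brevity even though the role itself is C#/.NET, and real messages carry many more fields and escaping rules.

```python
# Sketch of HL7 v2 message structure (PID/OBR/OBX segments in an ORU
# results message). The sample message is invented and abbreviated;
# production parsing would use a proper HL7 library and handle
# encoding characters, repetitions, and escapes.

SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202401011200||ORU^R01|MSG0001|P|2.3",
    "PID|1||12345^^^HOSP||DOE^JANE",
    "OBR|1||ORD789|CBC^COMPLETE BLOOD COUNT",
    "OBX|1|NM|WBC||6.5|10*3/uL|4.0-11.0|N",
])

def parse_segments(message: str) -> dict:
    """Index an HL7 v2 message by segment ID, splitting fields on '|'."""
    segments = {}
    for line in message.split("\r"):   # HL7 v2 segments are CR-separated
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

segs = parse_segments(SAMPLE_ORU)
patient_name = segs["PID"][0][5]   # PID-5: patient name (list index matches here)
message_type = segs["MSH"][0][8]   # ORU^R01 (MSH indexing is offset by the separator field)
```

Mapping an ORM/ORU feed into SQL Server tables, as the role describes, is essentially this segment/field extraction followed by a typed insert.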
    $109k-144k yearly est. 3d ago
  • Data Engineer

    Jahnel Group 3.2company rating

    Schenectady, NY jobs

    LTI (Logic Technology, Inc.), the "Pro People" company, is a privately held technology solutions provider that offers best-in-class services to local, national and global organizations. Now, after three decades, these initials have come to represent more than just our company name. They have also come to represent our hard-earned reputation for Leadership, Technology and Integrity. At LTI, we believe confident, motivated employees produce superior work, ensuring our client partnerships continue to thrive. We actively create an environment where great professionals want to be. We offer great benefits, interesting work and opportunities for personal development. Overview We are looking for a mid- to senior-level Data Engineer to design, build and support modern data solutions across cloud and machine learning platforms. This role focuses on creating scalable data pipelines, enabling ML workflows and driving the development of reliable, high-quality data systems within Azure. The ideal candidate brings strong analytical skills, hands-on engineering experience and a collaborative mindset to help shape the future of our data ecosystem. 
Responsibilities Build and maintain large-scale data pipelines that support analytics, reporting and machine learning initiatives Develop and optimize data workflows using Azure services such as Data Factory, Databricks, Functions and Azure Storage Collaborate with data scientists to enable feature engineering, prepare training datasets and support ML model deployment Integrate, cleanse and transform structured and unstructured data from a wide range of sources Design and implement scalable data models, schemas and storage patterns for batch and near-real-time processing Support ML/AI efforts using Python, Spark, MLflow, scikit-learn, TensorFlow or PyTorch Monitor pipeline performance, troubleshoot failures and ensure consistent data quality and reliability Contribute to CI/CD processes supporting both data engineering and machine learning automation Document data flows, integrations, design decisions and best practices Partner closely with cloud teams, software engineers and business stakeholders to deliver impactful and scalable data solutions Required Skills & Qualifications Strong experience working with Azure data services (ADF, Databricks, Synapse, Azure SQL, Azure Storage or similar) Proficiency in Python for data engineering and ML-related development Experience with machine learning frameworks such as scikit-learn, TensorFlow or PyTorch Hands-on background building ETL/ELT pipelines using Spark, Delta Lake or similar big-data tools Strong SQL skills including schema design, data modeling and performance tuning Solid understanding of version control, CI/CD practices and modern development workflows Ability to work effectively within cross-functional engineering and analytics teams Strong analytical thinking, troubleshooting abilities and attention to detail Experience supporting enterprise-scale environments or high-volume data systems Preferred Skills 6+ years of experience in data engineering or similar cloud/ML-focused roles Experience with distributed 
systems or large-scale data architectures Familiarity with ML lifecycle tools such as MLflow or Azure ML Exposure to data governance, cataloging or lineage solutions Understanding of DevOps practices, infrastructure automation or cloud security Experience contributing to process improvements, scaling data systems or optimizing data workflows Where We're Looking For: Schenectady, New York (100% remote for the right candidate) Other Information The work hours will be approximately 8:00 am to 5:00 pm EST, depending on workload, with the occasional late night when a tight deadline calls for it. We work for security-conscious clients, thus background checks will be required. Salary dependent upon experience.
    $95k-133k yearly est. Auto-Apply 12d ago
  • Senior Data Engineer - Hospitality & Travel - New York

    Truelogic 4.0company rating

    New York, NY jobs

    At Truelogic we are a leading provider of nearshore staff augmentation services headquartered in New York. For over two decades, we've been delivering top-tier technology solutions to companies of all sizes, from innovative startups to industry leaders, helping them achieve their digital transformation goals. Our team of 600+ highly skilled tech professionals, based in Latin America, drives digital disruption by partnering with U.S. companies on their most impactful projects. Whether collaborating with Fortune 500 giants or scaling startups, we deliver results that make a difference. By applying for this position, you're taking the first step in joining a dynamic team that values your expertise and aspirations. We aim to align your skills with opportunities that foster exceptional career growth and success while contributing to transformative projects that shape the future. Our Client Our client is completely redefining what it means to be a guest at a hotel. By offering day access to luxury hotel experiences, including breathtaking pools, private beaches, deluxe spas, and more, our client allows people to escape - without ever leaving town. If you're moved to contribute to our vision, we'd love your help. Job Summary Our client is seeking its first in-house Data Engineer. In this role, you will work with our Product Engineering, Business Intelligence, and leadership teams to drive forward our data initiatives. 
You will be responsible for building out and maintaining our data infrastructure, ensuring our data stakeholders have access to high-fidelity data in a timely manner, and that it's all maintained in a high-performance data warehouse. Responsibilities Design, implement, and maintain robust ETL processes, ensuring data accuracy, timeliness, and accessibility for analysis. Collaborate with engineering teams to ensure comprehensive data instrumentation. Maintain our data warehouse by being the primary administrator and making sure it is well provisioned and optimized. Promote effective self-service analytics infrastructure Qualifications and Job Requirements 5+ years of data engineering experience, ideally in a high-growth startup Strong understanding of cloud technologies (AWS preferred) Strong experience ingesting data from REST APIs to a data warehouse or data lake using various modern tooling, ranging from fundamental technologies like Airflow to self-serve tooling like Fivetran Strong understanding of data warehousing technologies (Redshift preferred) Expert-level SQL skills and exposure to data transformation technologies such as DBT Exposure to Looker or equivalent BI tools Demonstrated experience working cross-team and cross-functionally What We Offer Salary range: $180-200k USD/year 100% Remote Work: Enjoy the freedom to work from the location that helps you thrive. All it takes is a laptop and a reliable internet connection. Highly Competitive USD Pay: Earn an excellent, market-leading compensation in USD that goes beyond typical market offerings. Paid Time Off: We value your well-being. Our paid time off policies ensure you have the chance to unwind and recharge when needed. Work with Autonomy: Enjoy the freedom to manage your time as long as the work gets done. Focus on results, not the clock. Work with Top American Companies: Grow your expertise working on innovative, high-impact projects with industry-leading U.S. companies. 
Why You'll Like Working Here A Culture That Values You: We prioritize well-being and work-life balance, offering engagement activities and fostering dynamic teams to ensure you thrive both personally and professionally. Diverse, Global Network: Connect with over 600 professionals in 25+ countries, expand your network, and collaborate with a multicultural team from Latin America. Team Up with Skilled Professionals: Join forces with senior talent. All of our team members are seasoned experts, ensuring you're working with the best in your field. Apply now!
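The core skill this posting names, ingesting data from REST APIs into a warehouse, reduces to paging through an endpoint and staging the rows for a load. The sketch below shows just that pattern with a stubbed API; `fetch_page`, the payload shape, and the field names are invented stand-ins, and in practice Airflow or Fivetran would schedule and manage this, as the posting notes.

```python
# Stdlib sketch of REST-to-warehouse extraction. FAKE_API and fetch_page
# stand in for a real paginated HTTP endpoint (e.g. a urllib.request call);
# the endpoint, payload shape, and field names are invented assumptions.

FAKE_API = {
    1: {"rows": [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}], "next": 2},
    2: {"rows": [{"id": 3, "amount": 40}], "next": None},
}

def fetch_page(page: int) -> dict:
    """Stub for an HTTP GET of one page of results."""
    return FAKE_API[page]

def extract_all(start_page: int = 1) -> list:
    """Follow pagination until the API signals there are no more pages."""
    rows, page = [], start_page
    while page is not None:
        payload = fetch_page(page)
        rows.extend(payload["rows"])
        page = payload["next"]
    return rows

staged = extract_all()  # rows now ready to COPY into the warehouse
```

An orchestrator adds retries, incremental cursors, and scheduling around this loop; the extraction logic itself stays this simple.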
    $102k-145k yearly est. Auto-Apply 29d ago
  • Senior Data Management Professional - Data Engineer - Private Deals

    Bloomberg 4.8company rating

    New York, NY jobs

    Business Area Data Ref # 10047917 **Description & Requirements** Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint a complete picture for our clients-around the clock and around the world. In Data, we are responsible for delivering this data, news, and analytics through innovative technology-quickly and accurately. We apply product thinking, domain expertise, and technical insight to continuously improve our data offerings, ensuring they remain reliable, scalable, and fit-for-purpose in a fast-changing landscape. **Our Team:** The Private Deals Data team is responsible for Bloomberg's private transactions data model-an essential foundation for understanding global private markets. This data defines and enriches private companies. Private company data plays a critical role in research, investment analysis, compliance, and due diligence. Integrating diverse datasets with our core private company data helps clients uncover insights, validate exposures, monitor evolving risks, and identify new opportunities in opaque and fast-growing markets. **The Role:** We are seeking a Data Engineer to design, build, and maintain the data pipelines, models, and integrations that power Bloomberg's private company data ecosystem. This role focuses on embedding M&A and Private Market data into Bloomberg's semantic model and analytical frameworks such as BQL-enabling richer discoverability and seamless integration across client workflows. You will develop robust ETL processes, manage schema mappings, and implement scalable data transformations to ensure consistency and reliability. You'll also contribute to enhancing Bloomberg's private company valuation product by extending the data model, implementing versioning, and ensuring data provenance is fully traceable. 
Working closely with Product, Engineering, and Data teams, you'll play a central role in shaping the technical foundation that underpins Bloomberg's evolving private market offerings. **You Will:** + Design, build, and maintain data pipelines and models to integrate M&A and Private data into Bloomberg's semantic model and BQL. + Perform schema mapping, normalization, and transformation to align diverse datasets with internal standards. + Implement scalable ETL processes ensuring completeness, accuracy, and transparency of data. + Extend the valuation data model to support company-level valuations, versioning, and auditability. + Partner with Product and Data teams to define and implement data quality and consistency checks. + Develop documentation and reusable frameworks for data ingestion and integration. + Support discoverability and search capabilities by ensuring data is optimized for BQL and function-layer exposure. + Contribute to continuous improvement of data engineering practices and tools. **You'll Need to Have:** *We use years of experience as a guide but will consider all candidates who can demonstrate the required skills.* + 3+ years of experience in data engineering, software engineering, or related fields. + Strong programming skills in Python, Java, or Scala, with proficiency in data processing frameworks (e.g., Spark, Flink, or Beam). + Solid understanding of data modeling, schema design, and ETL architecture. + Experience working with relational and columnar data stores (e.g., SQL, Postgres, BigQuery). + Familiarity with semantic data models or knowledge graphs. + Understanding of data versioning, provenance tracking, and metadata management. + Strong problem-solving and debugging skills with attention to scalability and performance. + Comfort working in collaborative, cross-functional environments. **We'd Love to See:** + Experience integrating financial or private market data. 
+ Familiarity with Bloomberg Query Language (BQL) or similar query interfaces. + Understanding of valuation data, company financials, or transaction modeling. + Experience working in Agile teams. + Passion for building high-quality, transparent, and auditable data systems. **Does this sound like you?** Apply if you think we're a good match. We'll get in touch to let you know what the next steps are! Salary Range = 110000 - 190000 USD Annually + Benefits + Bonus The referenced salary range is based on the Company's good faith belief at the time of posting. Actual compensation may vary based on factors such as geographic location, work experience, market conditions, education/training and skill level. We offer one of the most comprehensive and generous benefits plans available and offer a range of total rewards that may include merit increases, incentive compensation (exempt roles only), paid holidays, paid time off, medical, dental, vision, short and long term disability benefits, 401(k) +match, life insurance, and various wellness programs, among others. The Company does not provide benefits directly to contingent workers/contractors and interns. Discover what makes Bloomberg unique with an inside look at our culture, values, and the people behind our success. Bloomberg is an equal opportunity employer and we value diversity at our company. We do not discriminate on the basis of age, ancestry, color, gender identity or expression, genetic predisposition or carrier status, marital status, national or ethnic origin, race, religion or belief, sex, sexual orientation, sexual and other reproductive health decisions, parental or caring status, physical or mental disability, pregnancy or parental leave, protected veteran status, status as a victim of domestic violence, or any other classification protected by applicable law. Bloomberg is a disability inclusive employer. 
Please let us know if you require any reasonable adjustments to be made for the recruitment process. If you would prefer to discuss this confidentially, please email amer_*********************
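The schema mapping and normalization work this role describes, aligning a vendor's fields and formats with an internal standard, can be sketched in a few lines. Everything below is invented for illustration (the vendor field names, the target schema, and the sample record); a real pipeline would also handle provenance, versioning, and quality checks as the posting lists.

```python
# Illustrative schema-mapping step: rename a vendor's fields to an
# internal schema and normalize types. The vendor fields, target names,
# and sample record are all invented assumptions for this sketch.

FIELD_MAP = {
    "CompanyName": "name",
    "DealValueUSD": "deal_value",
    "AnnounceDt": "announced",
}

def normalize(vendor_record: dict) -> dict:
    """Map vendor field names onto the internal schema and cast types."""
    out = {dst: vendor_record.get(src) for src, dst in FIELD_MAP.items()}
    out["deal_value"] = float(out["deal_value"])  # vendor delivers numbers as strings
    return out

rec = normalize({
    "CompanyName": "Acme Corp",
    "DealValueUSD": "125000000",
    "AnnounceDt": "2024-03-01",
})
```

At scale the same mapping lives in configuration rather than code, so new vendor feeds can be onboarded without touching the transformation engine.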
    $119k-165k yearly est. Easy Apply 17d ago
  • Data Engineer

    Iex 3.6company rating

    New York, NY jobs

    IEX (IEX Group, Inc.) is an exchange operator and technology company dedicated to innovating for performance in capital markets. Founded in 2012, IEX launched a new kind of securities exchange in 2016 that combines a transparent business model and unique architecture designed to protect investors. Today, IEX applies its proprietary technology and experience to drive performance across asset classes, serve all investors, and advocate for transparent and competitive markets. Overview We are seeking a skilled Data Engineer to design, build, and maintain data solutions that support our options exchange. You will work with high-volume market data, ensuring that it is efficiently ingested, transformed, and made available for downstream systems. This role is ideal for someone with strong core data engineering skills and a background in financial services who is comfortable working in fast-paced, data-intensive environments. What You'll Do Build and maintain databases to ingest and manage data for the options exchange. Develop and optimize ETL processes (extract, transform, load), with a focus on data cleaning and reliability. Work closely with stakeholders to integrate and process diverse datasets related to market data. Ensure system performance, scalability, and reliability in a Linux-based environment. Collaborate with engineering and trading teams to deliver high-quality data solutions. Contribute to ongoing improvements in data architecture and pipeline efficiency. About You Strong experience in data engineering with exposure to financial services and market data. Hands-on experience with ETL processes and data pipeline development. Proficiency with KDB+/Q is required. Python is preferred for scripting and automation. Strong knowledge of Linux environments (must have). Familiarity with Java and C++ is a plus. Options market experience is a nice to have, but not required. 
Self-motivated, detail-oriented, and comfortable working both independently and collaboratively. Why you should apply: Comprehensive Benefits Unlimited PTO 100% coverage for medical, dental, and vision New hire stock equity (RSUs) 401K employer match OneMedical membership 16 weeks paid parental leave Flexible workplace Employer charity match Learning stipend Commuter benefits Jump Start onboarding program Internal mentor program cross-departmentally Friendly and inclusive workplace culture Our job titles may span more than one career level. The starting annual base pay is between $175,000 and $225,000 for this NY-based position. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The annual base pay range is subject to change and may be modified in the future. This role is eligible for bonus and equity. Here at IEX, we are dedicated to an inclusive workplace and culture. We are an Equal Opportunity Employer that does not discriminate on the basis of actual or perceived race, color, creed, religion, alienage or national origin, ancestry, citizenship status, age, disability or handicap, sex, marital status, veteran status, sexual orientation, genetic information or any other characteristic protected by applicable federal, state or local laws.
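For this kind of market-data role, the production aggregations would typically live in KDB+/q as the posting requires, with Python used for scripting and automation. As a hedged illustration of the scripting side, here is a per-symbol VWAP over a batch of trade ticks; the tick records are invented sample data.

```python
from collections import defaultdict

# Python sketch of a per-symbol VWAP aggregation. In the role itself
# this would likely be a q query over a tick table; the TICKS sample
# data below is invented for illustration.

TICKS = [
    ("SPY", 450.10, 100), ("SPY", 450.20, 200), ("QQQ", 380.00, 50),
    ("SPY", 450.00, 100), ("QQQ", 380.50, 150),
]

def vwap_by_symbol(ticks):
    """Volume-weighted average price per symbol: sum(px*qty) / sum(qty)."""
    notional, volume = defaultdict(float), defaultdict(int)
    for sym, px, qty in ticks:
        notional[sym] += px * qty
        volume[sym] += qty
    return {sym: notional[sym] / volume[sym] for sym in notional}

vwaps = vwap_by_symbol(TICKS)
```

The equivalent q expression is a one-line `select` with a by-clause, which is precisely why KDB+/Q proficiency is the hard requirement here.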
    $175k-225k yearly Auto-Apply 60d+ ago
  • Data Engineer III (ETL Datastage)

    Techaffinity Consulting 4.1company rating

    New York, NY jobs

    The ideal candidate will be responsible for working with business analysts, data engineers, and upstream teams to understand impacts to data sources as the bank modernizes. Take the requirements and update/build ETL data pipelines using DataStage and DBT for ingestion into Financial Crimes applications. Perform testing and ensure data quality of updated data sources. Job Summary: Handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and also research new uses for data acquisition. Required to know and understand the ins and outs of the industry, such as data mining practices, algorithms, and how data can be used. Primary Responsibilities: Design, construct, install, test and maintain data management systems. Build high-performance algorithms, predictive models, and prototypes. Ensure that all systems meet the business/company requirements as well as industry practices. Integrate up-and-coming data management and software engineering technologies into existing data structures. Develop set processes for data mining, data modeling, and data production. Create custom software components and analytics applications. Research new uses for existing data. Employ an array of technological languages and tools to connect systems together. Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals. Install/update disaster recovery procedures. Recommend different ways to constantly improve data reliability and quality. Requirements Technical degree or related work experience Experience with non-relational & relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, etc.) Experience programming and/or architecting a back-end language (Java, J2EE, etc.) Business Intelligence - Data Engineering, ETL, DataStage Developer, SQL Strong communication skills, ability to collaborate with members of your team
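The "perform testing and ensure data quality" duty this posting describes is usually a set of rule-based checks run after each load. Below is a minimal sketch of that idea; the rules, field names, and sample rows are invented, and in this role the checks would actually run in DataStage jobs or as dbt tests rather than ad-hoc Python.

```python
# Hedged sketch of post-load data-quality checks. The rules, field
# names, and ROWS sample data are invented; in practice these would be
# DataStage job stages or dbt tests over the loaded tables.

ROWS = [
    {"account_id": "A1", "amount": 120.0, "currency": "USD"},
    {"account_id": "",   "amount": 75.0,  "currency": "USD"},  # fails not-null
    {"account_id": "A3", "amount": -5.0,  "currency": "usd"},  # fails two rules
]

CHECKS = {
    "account_id_not_null": lambda r: bool(r["account_id"]),
    "amount_non_negative": lambda r: r["amount"] >= 0,
    "currency_uppercase":  lambda r: r["currency"].isupper(),
}

def run_checks(rows):
    """Count failing rows per rule, like a tiny dbt-test report."""
    failures = {name: 0 for name in CHECKS}
    for r in rows:
        for name, rule in CHECKS.items():
            if not rule(r):
                failures[name] += 1
    return failures

report = run_checks(ROWS)
```

Keeping the rules declarative (a named map of predicates) is what makes such checks auditable, which matters in a Financial Crimes context.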
    $96k-135k yearly est. 60d+ ago

Learn more about Mindex jobs