
Data engineer jobs in Asheville, NC

10,041 jobs

  • Observability Engineer

    Vantor

    Data engineer job in McLean, VA

    Vantor is forging the new frontier of spatial intelligence, helping decision makers and operators navigate what's happening now and shape what's coming next. Vantor is a place for problem solvers, changemakers, and go-getters, where people work together to help our customers see the world differently, and in doing so, be seen differently. Come be part of a mission, not just a job, where you can shape your own future, build the next big thing, and change the world.

    To be eligible for this position, you must be a U.S. Person, defined as a U.S. citizen, permanent resident, asylee, or refugee. Note on cleared roles: if this position requires an active U.S. Government security clearance, applicants who do not currently hold the required clearance will not be eligible for consideration. Employment for cleared roles is contingent upon verification of clearance status. Export Control/ITAR: certain roles may be subject to U.S. export control laws, requiring U.S. person status as defined by 8 U.S.C. 1324b(a)(3). Please review the job details below. This position requires an active U.S. Government security clearance at the TS/SCI level with required polygraph.

    We are looking for a full-time Observability Engineer (OE) to gain deeper insight into complex systems and cloud-native environments. This role is part of our data collection and software development team, which ensures Vantor's services meet reliability and uptime standards appropriate to customers' needs. The environment calls for a fast rate of improvement while keeping an ever-watchful eye on capacity, performance, and cost. The OE will bring the mindset and engineering approaches needed to understand "the what" and "the why." They will build monitoring solutions to gain visibility into operational problems, ensuring customer value and satisfaction. Their focus is to drive observability and monitoring for new and existing systems in order to provide systems insight and resolve application and infrastructure issues. The successful candidate has the breadth of knowledge to discover, implement, and collaborate with teammates on solutions to complex problems across the entire technology stack.

    Responsibilities: Define standards for monitoring the reliability, availability, maintainability, and performance of sponsor-owned and operated systems. Design and architect operational solutions for managing applications and infrastructure. Drive service acceptance by adopting new processes into operations, developing new monitoring to expose risks, and automating repeatable actions. Partner with service and product owners to establish key performance indicators that identify trends and achieve better outcomes. Provide deep troubleshooting for production issues. Engage with service owners to maximize the team's ability to identify and remediate root-cause performance issues quickly, ensuring rapid recovery from service interruptions. Build and/or use tools to correlate disparate data sets in an efficient, automated way, helping teams quickly identify the root cause of issues and understand how different problems relate to each other. Coordinate with the sponsor to support major incidents, large-scale deployments, and SecOps user support.

    Minimum Qualifications: US citizenship required. Active/current TS/SCI with required polygraph. Bachelor's degree in computer science or a related area of study. Minimum 5 years of experience. Working knowledge of K8s, Docker, Helm, and automated deployment via pipeline (e.g., Concourse or Jenkins). Familiarity with distributed version control systems such as Git. Experience with AWS cloud services. Experience setting up monitoring and observability solutions across sponsor-owned systems, tools, and data feeds. Proficient in scripting with Python and Java. Willingness to work onsite full time. Ability and willingness to share on-call responsibilities. Advanced knowledge of Unix/Linux systems, with a high comfort level at the command line.

    Preferred Qualifications: Experience with cloud service providers beyond AWS. Experience with CloudWatch or other monitoring tools inside AWS. Familiarity with Prometheus/Grafana or other monitoring tools for ETL feeds, APIs, servers, C2S services, networks, and AI/ML capabilities. Good understanding of networking fundamentals. Organized, with the ability to document and communicate ongoing work tasks and projects. Receptive to giving, receiving, and implementing feedback in a highly collaborative environment. Understanding of incident and problem management. Ability to prioritize work effectively and encourage best practices in others. Meticulous and cautious, with the ability to identify and weigh all risks against performing the task efficiently. Experience with Root Cause Analysis (RCA). Experience with ETL processes. Willingness to step in as a leader to address ongoing incidents and problems while providing guidance to others to drive to a resolution.

    Pay Transparency: In support of pay transparency at Vantor, we disclose salary ranges on all U.S. job postings. The successful candidate's starting pay will fall within the salary range provided below and is determined based on job-related factors, including, but not limited to, experience, qualifications, knowledge, skills, geographic work location, and market conditions. Candidates with the minimum necessary experience, qualifications, knowledge, and skillsets should not expect to receive the upper end of the pay range. The base pay for this position within California, Colorado, Hawaii, New Jersey, the Washington, DC metropolitan area, and all other states is $180,000.00 - $220,000.00.

    Benefits: Vantor offers a competitive total rewards package that goes beyond the standard, including a robust 401(k) with company match, mental health resources, and unique perks like student loan repayment assistance, adoption reimbursement, and pet insurance to support all aspects of your life. You can find more information on our benefits at: ******************************

    The application window is three days from the date the job is posted, and the job will remain posted until a qualified candidate has been identified for hire. If the job is reposted for any reason, the same three-day window applies from the repost date. The date of posting can be found on Vantor's Career page at the top of each job posting. To apply, submit your application via Vantor's Career page.

    EEO Policy: Vantor is an equal opportunity employer committed to an inclusive workplace. We believe in fostering an environment where all team members feel respected, valued, and encouraged to share their ideas. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability, protected veteran status, age, or any other characteristic protected by law.
    $67k-90k yearly est. 2d ago
  • System Development Engineer II, DBS Relational ADC

    Amazon Development Center U.S., Inc. 4.7 company rating

    Data engineer job in Herndon, VA

    System Development Engineer

    The Amazon Web Services team is innovating new ways of building massively scalable distributed systems and delivering the next generation of cloud computing with AWS offerings like RDS and Aurora. In 2013, AWS launched 280 services; in 2016 alone we released nearly 1,000. We hold high standards for our computer systems and the services we deliver to our customers: our systems are highly secure, highly reliable, and highly available, all while functioning at massive scale; our employees are smart, passionate about the cloud, driven to serve customers, and fun to work with.

    A successful engineer joining the team will do much more than write code and triage problems. They will work with Amazon's largest and most demanding customers to address specific needs across a full suite of services. They will dive deeply into technical issues and work diligently to improve the customer experience.

    The ideal candidate will...
    - Be great fun to work with. Our company credo is "Work hard. Have fun. Make history." The right candidate will love what they do and instinctively know how to make work fun.
    - Have strong Linux and networking fundamentals. The ideal candidate will have deep experience working with Linux, preferably in a large-scale, distributed environment. You understand networking technology and how servers and networks inter-relate. You regularly take part in deep-dive troubleshooting and conduct technical post-mortem discussions to identify the root cause of complex issues.
    - Love to code. Whether it's building tools in Java or solving complex system problems in Python, the ideal candidate will love using technology to solve problems. You have a solid understanding of software development methodology and know how to use the right tool for the right job.
    - Think big. The ideal candidate will build and deploy solutions across thousands of devices. You will strive to improve and streamline processes to allow for work on a massive scale.

    This position requires that the candidate selected currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work.

    Key job responsibilities
    - You design, implement, and deploy software components and features. You solve difficult problems, generating positive feedback.
    - You have a solid understanding of design approaches (and how to best use them).
    - You are able to work independently and with your team to deliver software successfully.
    - Your work is consistently of a high quality (e.g., secure, testable, maintainable, low-defect, efficient) and incorporates best practices. Your team trusts your work.
    - Your code reviews tend to be rapid and uneventful. You provide useful code reviews for changes submitted by others.
    - You focus on operational excellence, constructively identifying problems and proposing solutions, and taking on projects that improve your team's software, making it better and easier to maintain.
    - You make improvements to your team's development and testing processes.
    - You have established good working relationships with peers. You recognize discordant views and take part in constructive dialogue to resolve them.
    - You are able to confidently train new teammates about your customers, what your team's software does, how it is constructed, tested, and operated, and how it fits into the bigger picture.

    A day in the life: Engineers in this role will work on automation, development, and operations to support AWS machine learning services for US government customers. They will work in an agile environment, attend daily standup, and collaborate closely with teammates. They will work on exciting challenges at scale and tackle unsolved problems. They will support the U.S. Intelligence Community and Defense agencies to implement innovative cloud computing solutions and solve unique technical problems.

    About the team
    Why AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
    Utility Computing (UC): AWS Utility Computing (UC) provides product innovations - from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.
    Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
    Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
    Mentorship and Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.
    Diverse Experiences: Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.

    BASIC QUALIFICATIONS
    - Bachelor's degree in computer science or equivalent
    - 3+ years of non-internship professional software development experience
    - Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, or Ruby
    - Knowledge of systems engineering fundamentals (networking, storage, operating systems)
    - 1+ years of experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
    - Current, active US Government security clearance of TS/SCI with polygraph

    PREFERRED QUALIFICATIONS
    - Experience with PowerShell (preferred), Python, Ruby, or Java
    - Experience working in an Agile environment using the Scrum methodology

    Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

    Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $116,300/year in our lowest geographic market up to $201,200/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
    $116.3k-201.2k yearly 1d ago
  • Cybersecurity Engineer III **

    SimVentions, Inc. 4.6 company rating

    Data engineer job in Virginia Beach, VA

    SimVentions, consistently voted one of Virginia's Best Places to Work, is looking for an experienced cybersecurity professional to join our team! As a Cybersecurity Engineer III, you will play a key role in advancing cybersecurity operations by performing in-depth system hardening, vulnerability assessment, and security compliance activities in accordance with DoD requirements. The ideal candidate will have a solid foundation in cybersecurity practices and proven experience supporting both Linux and Windows environments across DoD networks. You will work collaboratively with Blue Team, Red Team, and other cybersecurity professionals on overall cyber readiness defense and system accreditation efforts.

    ** Position is contingent upon award of contract, anticipated in December of 2025. **

    Clearance: An ACTIVE Secret clearance (IT Level II Tier 5 / Special-Sensitive Position) is required for this position. Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information. US citizenship is required to obtain a clearance.

    Requirements: In-depth understanding of computer security, military system specifications, and DoD cybersecurity policies. Strong ability to communicate clearly and succinctly in written and oral presentations. Must possess one of the following DoD 8570.01-M IAT Level III baseline certifications: CASP+ CE, CCNP Security, CISA, CISSP (Associate), CISSP, GCED, GCIH, or CCSP.

    Responsibilities: Develop Assessment and Authorization (A&A) packages for various systems. Develop and maintain security documentation such as: Authorization Boundary Diagram; System Hardware/Software/Information Flow; System Security Plan; Privacy Impact Assessment; e-Authentication; Implementation Plan; System Level Continuous Monitoring Plan; Ports, Protocols and Services Registration; and Plan of Action and Milestones (POA&M). Conduct annual FISMA assessments. Perform continuous monitoring of authorized systems. Generate and update test plans; conduct testing of system components using the Assured Compliance Assessment Solution (ACAS) tool, implement Security Technical Implementation Guides (STIGs), and conduct Information Assurance Vulnerability Management (IAVM) reviews. Perform automated ACAS scanning and STIG/SCAP checks (Evaluate-STIG, Tenable Nessus, etc.) on various standalone and networked systems. Analyze cybersecurity test scan results and develop/assist with documenting open findings in the Plan of Action and Milestones (POA&M). Analyze DISA Security Technical Implementation Guide test results and develop/assist with documenting open findings in the POA&M.

    Preferred Skills and Experience: A combined total of ten (10) years of full-time professional experience in all of the following functional areas: computer security, military system specifications, and DoD cybersecurity policies; National Cyber Range Complex (NCRC) Total Ship Computing Environment (TSCE) Program requirements and mission, ship install requirements, and protocols (preferred); Risk Management Framework (RMF) and the implementation of cybersecurity and IA boundary defense techniques and various IA-enabled appliances. Examples of these appliances and applications are firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), switches/routers, Cross Domain Solutions (CDS), eMASS, and Endpoint Security Solution (ESS). Performing STIG implementation. Performing vulnerability assessments with the ACAS tool. Remediating vulnerability findings, including implementing vendor patches on both Linux and Windows operating systems.

    Education: Bachelor of Science in Information Systems, Information Technology, Computer Science, or Computer Engineering.

    Compensation: Compensation at SimVentions is determined by a number of factors, including, but not limited to, the candidate's experience, education, training, security clearance, work location, skills, knowledge, and competencies, as well as alignment with our corporate compensation plan and contract-specific requirements. The projected annual compensation range for this position is $90,000 - $140,000 (USD). This estimate reflects the standard salary range for this position and is just one component of the total compensation package that SimVentions offers.

    Benefits: At SimVentions, we're committed to supporting the total well-being of our employees and their families. Our benefit offerings include comprehensive health and welfare plans to serve a variety of needs. We offer: medical, dental, vision, and prescription drug coverage; an Employee Stock Ownership Plan (ESOP); competitive 401(k) programs; retirement and financial counselors; Health Savings and Health Reimbursement Accounts; Flexible Spending Accounts; life insurance and short- & long-term disability; continuing education assistance; paid time off, paid holidays, and paid leave (e.g., maternity, paternity, jury duty, bereavement, military); a third-party Employee Assistance Program that offers emotional and lifestyle well-being services, including free counseling; and a Supplemental Benefit Program.

    Why Work for SimVentions?: SimVentions is about more than just being a place to work with other growth-oriented, technically exceptional experts. It's also a fun place to work. Our family-friendly atmosphere encourages our employee-owners to imagine, create, explore, discover, and do great things together. Support Our Warfighters: SimVentions is a proud supporter of the U.S. military, and we take pride in our ability to provide relevant, game-changing solutions to our armed men and women around the world. Drive Customer Success: We deliver innovative products and solutions that go beyond the expected. This means you can expect to work with a team that will allow you to grow, have a voice, and make an impact. Get Involved in Giving Back: We believe a well-rounded company starts with well-rounded employees, which is why we offer diverse service opportunities for our team throughout the year. Build Innovative Technology: SimVentions takes pride in its innovative and cutting-edge technology, so you can be sure that whatever project you work on, you will be having a direct impact on our customers' success. Work with Brilliant People: We don't just hire the smartest people; we seek experienced, creative individuals who are passionate about their work and thrive in our unique culture. Create Meaningful Solutions: We are trusted partners with our customers and are provided challenging and meaningful requirements to help them solve.

    Employees who join SimVentions will enjoy additional perks like: Employee ownership: Work with the best and help build YOUR company! Family focus: Work for a team that recognizes the importance of family time. Culture: Add to our culture of technical excellence and collaboration. Dress code: Business casual - we like to be comfortable while we work. Resources: Excellent facilities, tools, and training opportunities to grow in your field. Open communication: Work in an environment where your voice matters. Corporate fellowship: Opportunities to participate in company sports teams and employee-led interest groups for personal and professional development. Employee appreciation: Multiple corporate events throughout the year, including holiday events, the company picnic, Imagineering Day, and more. Founding partner of the FredNats baseball team: Equitable distribution of tickets for every home game to be enjoyed by our employee-owners and their families from our private suite. Food: We have a lot of food around here! FTAC
    $90k-140k yearly 2d ago
  • Data Engineer

    The Ash Group

    Data engineer job in Falls Church, VA

    *** W2 Contract Only - No C2C - No 3rd Parties ***

    The Ash Group is hiring a new Programmer Analyst Principal (Data Engineer) for our client (a global leader providing advanced systems and support in defense, aerospace, and security) based in Falls Church, VA. In this role, you'll be designing, implementing, and optimizing large-scale data systems and ETL pipelines, with a strong focus on using Amazon Redshift and AWS services to ensure data quality and integrity for complex defense programs.

    Compensation, Benefits, and Role Info: Competitive pay rate of $65 per hour. Medical, dental, vision, and direct primary care benefits, and, after six months of employment, a 4% matched 401(k) plan with immediate 100% vesting. Type: 12-month contract with potential extension or conversion. Location: On-site in Falls Church, VA.

    What You'll Be Doing: Design and implement large-scale ETL data pipelines using AWS Glue and Python/PySpark to ingest, transform, and load data from various sources. Build and maintain robust data warehouses, focusing on Amazon Redshift, including data modeling and governance. Write and optimize complex, highly performant SQL queries across large datasets (Redshift, Oracle, SQL Server). Collaborate with cross-functional teams (data scientists, analysts) to understand requirements and deliver end-to-end data solutions. Troubleshoot, optimize performance, and resolve data-related issues like pipeline failures and data quality bottlenecks.

    What We're Looking For: 8+ years of hands-on experience in data engineering, focusing on designing and implementing large-scale data systems. 5+ years of experience building production-level ETL pipelines using AWS Glue and Python/PySpark. Deep proficiency in SQL, including query optimization, indexing, and performance tuning across data warehouses like Amazon Redshift. Strong understanding of database design principles, data modeling (star/snowflake schemas), and data governance. Experience with data processing/orchestration frameworks such as Apache Airflow, Apache Kafka, or Fivetran.

    If you're a seasoned data engineering professional passionate about building scalable data solutions and driving innovation in cloud-based environments, we want to hear from you. This is an exciting opportunity to work on cutting-edge technologies, collaborate with cross-functional teams, and make a meaningful impact on data-driven decision-making. Apply now to be part of a forward-thinking organization where your expertise will shape the future of our data infrastructure. #DataEngineer #DataEngineering #AWSEngineer #Redshift #ETL #PySpark #DataPipeline #Contract
    $65 hourly 23h ago
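    For orientation, a minimal PySpark sketch of the kind of Glue-style ETL step the posting above describes. The bucket names, columns, and job structure are hypothetical, and a real AWS Glue job would typically also use Glue's GlueContext/DynamicFrame APIs and a Redshift connector rather than plain Spark alone.

        # Minimal, illustrative PySpark ETL step; all paths and columns are
        # hypothetical. Requires a Spark environment with S3 credentials.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("example-etl").getOrCreate()

        # Extract: read raw events from a hypothetical landing area.
        raw = spark.read.parquet("s3://example-bucket/raw/events/")

        # Transform: deduplicate and derive a partition-friendly date column.
        clean = (
            raw.dropDuplicates(["event_id"])
               .withColumn("event_date", F.to_date("event_ts"))
               .filter(F.col("event_date").isNotNull())
        )

        # Load: write a curated, partitioned layer; a production job might
        # instead load into Amazon Redshift through a JDBC/Redshift connector.
        (clean.write.mode("overwrite")
              .partitionBy("event_date")
              .parquet("s3://example-bucket/curated/events/"))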
  • Data Scientist with GenAI and Python

    Dexian

    Data engineer job in Charlotte, NC

    Dexian is seeking a Data Scientist with GenAI and Python for an opportunity with a client located in Charlotte, NC.

    Responsibilities: Design, develop, and deploy GenAI models, including LLMs, GANs, and transformers, for tasks such as content generation, data augmentation, and creative applications. Analyze complex data sets to identify patterns, extract meaningful features, and prepare data for model training, with a focus on data quality for GenAI. Develop and refine prompts for LLMs, and optimize GenAI models for performance, efficiency, and specific use cases. Deploy GenAI models into production environments, monitor their performance, and implement strategies for continuous improvement and model governance. Work closely with cross-functional teams (e.g., engineering, product) to understand business needs, translate them into GenAI solutions, and effectively communicate technical concepts to diverse stakeholders. Stay updated on the latest advancements in GenAI and data science, and explore new techniques and applications to drive innovation within the organization. Utilize Python and its extensive libraries (e.g., scikit-learn, TensorFlow, PyTorch, Pandas, LangChain) for data manipulation, model development, and solution implementation.

    Requirements: Proven hands-on experience implementing GenAI projects using open-source LLMs (Llama, GPT OSS, Gemma, Mistral) and proprietary APIs (OpenAI, Anthropic). Strong background in Retrieval-Augmented Generation implementations. In-depth understanding of embedding models and their applications. Hands-on experience with Natural Language Processing (NLP) solutions on text data. Strong Python development skills; should be comfortable with Pandas and NumPy for data analysis and feature engineering. Experience building and integrating APIs (REST, FastAPI, Flask) for serving models. Fine-tuning and optimizing open-source LLMs/SLMs is a big plus. Knowledge of agentic AI frameworks and orchestration. Experience in ML and deep learning is an advantage. Familiarity with cloud platforms (AWS/Azure/GCP). Experience working with Agile methodology. Strong problem-solving, analytical, and interpersonal skills. Ability to work effectively in a team environment. Strong written and oral communication skills and the ability to clearly express ideas.

    Dexian is a leading provider of staffing, IT, and workforce solutions with over 12,000 employees and 70 locations worldwide. As one of the largest IT staffing companies and the 2nd largest minority-owned staffing company in the U.S., Dexian was formed in 2023 through the merger of DISYS and Signature Consultants. Combining the best elements of its core companies, Dexian's platform connects talent, technology, and organizations to produce game-changing results that help everyone achieve their ambitions and goals. Dexian's brands include Dexian DISYS, Dexian Signature Consultants, Dexian Government Solutions, Dexian Talent Development, and Dexian IT Solutions. Visit ******************* to learn more.

    Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
    $68k-95k yearly est. 3d ago
  • Data Scientist

    Parker's Kitchen 4.2 company rating

    Data engineer job in Savannah, GA

    We are looking for a Data Scientist with expertise in optimization and forecasting to help improve how we manage labor, staffing, and operational resources across our retail locations. This role is critical in building models and decision-support tools that ensure the right people, in the right place, at the right time - balancing customer service, efficiency, and cost. You will work closely with Operations, Finance, and Store Leadership teams to deliver practical solutions that improve labor planning, scheduling, and demand forecasting. The right candidate will be confident, resourceful, and excited to own both the technical and business-facing aspects of applying data science in a fast-paced retail environment.

    Responsibilities: Build and maintain forecasting models (time-series, machine learning, and statistical) for sales and transactions. Develop and deploy optimization models (linear/mixed-integer programming, heuristics, simulation) to improve workforce scheduling and labor allocation. Partner with operations and finance to translate forecasts into actionable staffing and labor plans that reduce costs while maintaining service levels. Build dashboards and automated tools to track forecast accuracy, labor KPIs, and staffing effectiveness. Provide insights and "what-if" scenario modeling to support strategic workforce and budget planning.

    Knowledge, Skills, and Abilities: Strong foundation in forecasting techniques (time-series models, regression, machine learning) and optimization methods (linear/mixed-integer programming, heuristics, simulation). Proficiency in Python or R for modeling and analysis, along with strong SQL skills for working with large-scale datasets. Knowledge of statistics, probability, and applied mathematics to support predictive and prescriptive modeling. Experience building and deploying predictive models, optimization tools, and decision-support solutions that drive measurable business outcomes. Strong data storytelling and visualization skills using tools such as Power BI, Tableau, or Looker. Ability to translate analytical outputs into clear, actionable recommendations for non-technical stakeholders. Strong collaboration skills with the ability to partner cross-functionally with Operations, Finance, and Store Leadership to drive adoption of data-driven approaches. Ability to work independently and resourcefully, combining technical depth with practical problem-solving to deliver results in a fast-paced environment.

    Education and Requirements: Required: Bachelor's or Master's degree in Data Science, Statistics, Applied Mathematics, Industrial Engineering, Operations Research, or a related field. Minimum 2-3 years of professional experience in data science or a related area. Strong skills in time-series forecasting (e.g., ARIMA, Prophet, ML-based approaches). Proficiency in optimization techniques (linear programming, integer programming). Strong Python or R programming skills. SQL expertise for large, complex datasets. Strong communication skills with the ability to partner with business stakeholders. Preferred: Experience in retail, restaurant, and/or convenience stores a plus. Experience with cloud platforms (Snowflake, AWS, GCP, Azure). Knowledge of BI tools (Tableau, Power BI, Looker).

    Physical Requirements: Prolonged periods sitting/standing at a desk and working on a computer. Must be able to lift up to 50 pounds.

    Parker's is an equal opportunity employer committed to hiring a diverse workforce and sustaining an inclusive culture. Parker's does not discriminate on the basis of disability, veteran status, or any other basis protected under federal, state, or local laws.
    $73k-100k yearly est. 23h ago
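    Purely for illustration, here is the time-series forecasting this posting asks about (ARIMA et al.) in its simplest form. The data is synthetic and the model order is an untuned placeholder, not a recommended configuration.

        # Illustrative only: forecast daily sales with a classical ARIMA model.
        # Synthetic data; the (1, 1, 1) order is a placeholder - a real model
        # would be selected against held-out weeks (e.g., by MAPE).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Two years of synthetic daily sales with a weekly pattern.
        idx = pd.date_range("2023-01-01", periods=730, freq="D")
        rng = np.random.default_rng(0)
        weekly = 50 * np.sin(2 * np.pi * idx.dayofweek.to_numpy() / 7)
        sales = pd.Series(1000 + weekly + rng.normal(0, 25, len(idx)), index=idx)

        # Fit, then forecast the next 14 days to feed a staffing/labor plan.
        model = ARIMA(sales, order=(1, 1, 1)).fit()
        print(model.forecast(steps=14))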
  • Data Scientist

    Astec Digital

    Data engineer job in Chattanooga, TN

    BUILT TO CONNECT

    At Astec, we believe in the power of connection and the importance of building long-lasting relationships with our employees, customers, and the communities we call home. With a team more than 4,000 strong, our employees are our #1 advantage. We invest in skills training and provide opportunities for career development to help you grow along with the business. We offer programs that support physical safety, as well as benefits and resources to enhance total health and wellbeing, so you can be your best at work and at home. Our equipment is used to build the roads and infrastructure that connect us to each other and to the goods and services we use. We are an industry leader known for delivering innovative solutions that create value for our customers. As our industry evolves, we are using new technology and data like never before. We're looking for creative problem solvers to build the future with us. Connect with us today and build your career at Astec.

    LOCATION: Chattanooga, TN. On-site / hybrid (role must report on-site regularly).

    ABOUT THE POSITION: The Data Scientist will play a key role in establishing the analytical foundation of Astec Smart Services. This individual will lead efforts to build pipelines from source to cloud, define data workflows, build predictive models, and help guide the team's approach to turning data into customer value. He or she will work closely within Smart Services and cross-functionally to ensure insights are actionable and impactful. The role blends data architecture, data engineering, and data science to help build Smart Services' analytical foundation. This person will be instrumental in advancing Astec's digital transformation and aftermarket strategy.

    Deliverables & Responsibilities
    Data Engineering: Build and maintain robust data pipelines for ingestion, transformation, and storage. Optimize ETL processes for scalability and performance.
    Data Architecture: Design and implement data models that support analytics and operational needs. Define standards for data governance, security, and integration.
    Data Science: Develop predictive models and advanced analytics to support business decisions. Apply statistical and machine learning techniques to large datasets.
    Bring strong business acumen to understand decision drivers with internal and external customers. Collaborate with individuals and departments across the company to ensure insights are aligned with customer needs and drive value.

    To be successful in this role, your experience and competencies are: Bachelor's degree in data science, engineering, or a related field (advanced degrees a plus). 5+ years of experience in data science, including at least 3 years in industrial or operational environments. Strong communication and project management skills are critical. Proficiency in data pipeline tools (e.g., Spark, Airflow) and cloud platforms (Azure, AWS, GCP). Strong understanding of data modeling principles and database technologies (SQL/NoSQL). Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and statistical analysis. Ability to work across data architecture design and data science experimentation. Programming: Python, SQL, and optionally Scala or Java. Familiarity with distributed systems and big data technologies. Strong communication skills for translating technical insights into business value. Ability to work across technical, commercial, and customer-facing teams.

    Supervisor and Leadership Expectations: This role will not have supervisory or managerial responsibilities. This role will have program management responsibilities.

    Our Culture and Values: Employees who become part of Astec embody these values throughout their work: continuous devotion to meeting the needs of our customers; honesty and integrity in all aspects of business; respect for all individuals; preserving entrepreneurial spirit and innovation; and safety, quality, and productivity as means to ensure success.

    EQUAL OPPORTUNITY EMPLOYER: As an Equal Opportunity Employer, Astec does not discriminate on the basis of race, creed, color, religion, gender (sex), sexual orientation, gender identity, marital status, national origin, ancestry, age, disability, citizenship status, a person's veteran status, or any other characteristic protected by law or executive order.
    $68k-94k yearly est. 2d ago
  • Data Scientist

    Coforge

    Data engineer job in Atlanta, GA

    Role: Data Scientist. Mode of hire: Full time.

    Key Responsibilities
    • Data wrangling & feature engineering: Ingest, clean, and transform data from SQL, APIs, and data lakes (e.g., Snowflake, Databricks). Design robust pipelines that feed into analytics and ML workflows.
    • Data understanding & exploration: Work closely with domain experts to deeply understand the meaning, context, quality, and limitations of available datasets. Translate business questions into data requirements and analytics plans.
    • Machine learning development: Build, tune, and validate predictive models using scikit-learn, SparkML, XGBoost, or TensorFlow.
    • Cross-functional partnership: Collaborate with marketing, sales, and product teams to scope business use cases, define success metrics, and integrate models into operational workflows.
    • Model deployment & MLOps: Deploy and manage models using MLflow, Docker, and CI/CD pipelines. Implement versioning, testing, performance monitoring, and retraining strategies as part of a robust MLOps practice.
    • Infrastructure support: Work with data engineering and DevOps teams to maintain and improve model training and deployment infrastructure, including compute resources, workflow orchestration, and environment configuration.
    • Insight delivery: Build clear, actionable reporting and visualizations using tools like Power BI or Tableau. Focus on impact, not just analysis.

    Skills Required:
    • Bachelor's degree in Data Science, Computer Science, Engineering, or a related quantitative field.
    • 5+ years of experience in a data science, ML engineering, or analytics role.
    • Strong SQL and Python programming skills and a strong command of ML techniques.
    • Experience with Azure Cloud, Databricks, and/or Snowflake.
    • Experience building and deploying machine learning models in production environments, with hands-on experience in Databricks, including SparkML and MLflow integration.
    • Familiarity with MLOps best practices, including version control, model monitoring, and automated testing.
    • Experience with tools such as Git, MLflow, Docker, and workflow schedulers.
    • Ability to communicate complex technical work to non-technical stakeholders.
    • Experience with scalable model training environments and distributed computing.

    Preferred Qualifications
    • Master's degree in a quantitative or technical discipline.
    • Experience in financial services, fintech, or enterprise B2B analytics.
    • Knowledge of A/B testing, causal inference, and statistical experimentation.
    • Familiarity with GenAI, LLM pipelines, vector-based retrieval, and platforms like Snowflake Cortex is a plus.
    $66k-91k yearly est. 2d ago
  • Snowflake Data Engineer

    Covetus 3.8 company rating

    Data engineer job in Durham, NC

    Experience in developing with, and proficiency in, SQL, plus knowledge of Snowflake cloud computing environments. Knowledge of data warehousing concepts and metadata management. Experience with data modeling, data lakes, multi-dimensional models, and data dictionaries. Hands-on experience with Snowflake features like Time Travel and Zero-Copy Cloning. Experience in query performance tuning and cost optimization on a cloud data platform. Knowledge of Snowflake warehousing, architecture, processing, and administration, plus DBT and pipelines. Hands-on experience with PL/SQL. Excellent personal communication, leadership, and organizational skills. Should be well versed in various design patterns. Knowledge of SQL databases is a plus. Hands-on Snowflake development experience is a must. Will work with various cross-functional groups and tech leads from other tracks, work closely with the team, and guide them technically and functionally. Must be a team player with a good attitude.
    $74k-97k yearly est. 23h ago
  • Data Conversion Engineer

    Paymentus 4.5 company rating

    Data engineer job in Charlotte, NC

    Summary/Objective: Are you looking to work at a high-growth, innovative, and purpose-driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1,700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.

    About the Role: The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part of ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensuring all bill payment data converts properly and efficiently onto the Paymentus platform.

    Responsibilities: Develop data conversion procedures using SQL, Java, and Linux scripting. Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion. Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications. Develop new specifications to satisfy new customers and products. Serve as the primary point of contact/driver for all technical conversion activities. Review the conversion calendar and offer technical support and solutions to meet deadlines and contract dates. Maintain and update technical conversion documentation to share with internal and external clients and partners. Work in close collaboration with implementation, integration, product, and development teams, using exceptional communication skills. Adapt and creatively solve problems under high stress and tight deadlines. Learn the database structure and business logic, and combine all knowledge to improve processes. Be flexible. Monitor new client conversions and support existing clients if needed; provide daily problem solving, coordination, and communication. Manage multiple projects and conversion implementations. Proactively troubleshoot and solve problems with limited supervision.

    Qualifications: B.S. degree in Computer Science or comparable experience. Strong knowledge of Linux and the command-line interface. Exceptional SQL skills. Experience with logging/monitoring tools (AWS CloudWatch, Splunk, ELK, etc.). Familiarity with various online banking applications and understanding of third-party integrations is a plus. Effective written and verbal communication skills. Problem solver - recognizes the need to resolve issues quickly and effectively; uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who and when to involve when troubleshooting issues. Communication - ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing. Dynamic and self-motivated - able to work on their own initiative and deliver the objectives required to maintain service levels. Strong attention to detail. Proficiency with raw data, analytics, or data reporting tools.

    Preferred Skills: Background in the payments, banking, e-commerce, finance, and/or utility industries. Experience with front-end web interfaces (HTML5, JavaScript, CSS3). Cloud technologies (AWS, GCP, Azure).

    Work Environment: This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers, and smartphones.

    Physical Demands: This role requires sitting or standing at a computer workstation for extended periods of time.

    Position Type/Expected Hours of Work: This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.

    Travel: No travel is required for this position.

    Other Duties: Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.

    Equal Opportunity Statement: Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay and other forms of compensation, training, and general treatment during employment.

    Reasonable Accommodation: Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department or his or her direct supervisor.
    $82k-114k yearly est. 23h ago
  • W2 Opportunity // GCP Data Engineer // Atlanta, GA

    Cloudingest

    Data engineer job in Atlanta, GA

    Job Description: GCP Data Engineer. Rate: $50/hr. on W2 (No C2C).

    We are seeking a highly skilled GCP Data Engineer to design, build, and optimize cloud-native data pipelines and analytics solutions on Google Cloud Platform. The ideal candidate has strong experience with Python, BigQuery, Cloud Data Fusion, and core GCP services such as Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. This role requires a strong foundation in data warehousing concepts and scalable data engineering practices.

    Responsibilities: Design, develop, and maintain robust ETL/ELT pipelines on Google Cloud Platform. Build and optimize data workflows using Cloud Data Fusion, BigQuery, and Cloud Composer. Write efficient and maintainable Python code to support data ingestion, transformation, and automation. Develop optimized BigQuery SQL for analytics, reporting, and large-scale data modeling. Utilize GCP services such as Cloud Storage, Pub/Sub, and Cloud Functions to build event-driven and scalable data solutions. Ensure data quality, governance, and reliability across all pipelines. Collaborate with cross-functional teams to deliver clean, trusted, production-ready datasets. Monitor, troubleshoot, and resolve performance issues in cloud data pipelines and workflows.

    Must-Have Skills: Strong experience with GCP BigQuery (data modeling, SQL development, performance tuning). Proficiency in Python for data engineering and pipeline automation. Hands-on experience with Cloud Data Fusion for ETL/ELT development. Working experience with key GCP services: Cloud Composer, Cloud Storage, Cloud Functions, and Pub/Sub. Strong understanding of data warehousing concepts, star/snowflake schemas, and best practices. Solid understanding of cloud data architecture and distributed processing.

    Good-to-Have Skills: Experience with Vertex AI for ML pipeline integration or model deployment. Familiarity with Dataproc (Spark/Hadoop) for large-scale processing. Knowledge of CI/CD workflows, Git, and DevOps best practices. Experience with Cloud Logging/Monitoring tools.
    $50 hourly 1d ago
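    A minimal sketch, for orientation only, of the event-driven pattern the posting above names: a Pub/Sub-triggered Cloud Function that loads a newly landed Cloud Storage object into BigQuery. The project, bucket, table, and message shape are all hypothetical.

        # Illustrative only: Pub/Sub-triggered Cloud Function (1st gen) that
        # loads a new GCS object into BigQuery. All names are hypothetical.
        import base64
        import json
        from google.cloud import bigquery

        TABLE_ID = "example-project.analytics.events"  # hypothetical table

        def handler(event, context):
            # Pub/Sub payloads arrive base64-encoded; assume a JSON message
            # carrying the name of the object that just landed in GCS.
            message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
            uri = f"gs://example-landing-bucket/{message['name']}"

            client = bigquery.Client()
            job_config = bigquery.LoadJobConfig(
                source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
                write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
            )
            # Start the load job and block so any errors surface here.
            client.load_table_from_uri(uri, TABLE_ID, job_config=job_config).result()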
  • Cloud Data Engineer- Databricks

    Infocepts 3.7 company rating

    Data engineer job in McLean, VA

    Purpose: We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune 500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions.

    Key Result Areas and Activities: Design and implement robust, scalable data engineering solutions. Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI. Collaborate with analytics and AI teams to enable real-time and batch data workflows. Support and improve cloud-native data platforms (AWS, Azure, GCP). Ensure adherence to best practices in data modeling, warehousing, and governance. Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices. Implement and maintain workflow orchestration tools like Apache Airflow and dbt.

    Essential Skills: 4+ years of experience in data engineering with a focus on scalable solutions. Strong hands-on experience with Databricks in a cloud environment. Proficiency in Spark and Python for data processing. Solid understanding of data modeling, data warehousing, and architecture principles. Experience working with at least one major cloud provider (AWS, Azure, or GCP). Familiarity with CI/CD pipelines and data workflow automation.

    Desirable Skills: Direct experience with Unity Catalog and Mosaic AI within Databricks. Working knowledge of DevOps/DataOps principles in a data engineering context. Exposure to Apache Airflow, dbt, and modern data orchestration frameworks.

    Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus.

    Qualities: Able to consult, write, and present persuasively. Able to work in a self-organized and cross-functional team. Able to iterate based on new information, peer reviews, and feedback. Able to work seamlessly with clients across multiple geographies. Research-focused mindset. Excellent analytical, presentation, reporting, documentation, and interactive skills.

    "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $77k-105k yearly est. 23h ago
  • Data Engineer

    A2C 4.7 company rating

    Data engineer job in Alpharetta, GA

    5 days onsite in Alpharetta, GA.

    Skills required: Python, data pipelines, data analysis, data modeling, solid cloud experience, AI/ML, strong problem-solving skills, and strong communication skills.

    We are looking for a problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies. Solid understanding of, and hands-on experience with, major cloud platforms. Experience in designing and implementing data pipelines. Must have experience with one of the following: GCP, AWS, or Azure - and MUST have the drive to learn GCP.
    $77k-106k yearly est. 1d ago
  • Data Engineer - OrcaWorks AI

    OrcaWorks AI

    Data engineer job in Atlanta, GA

    Experience Level: Entry-level (Master's preferred)

    About OrcaWorks AI: At OrcaWorks AI, we're building next-generation AI systems that empower businesses to make data-driven decisions with intelligence and speed. We're seeking passionate Data Engineers who love solving real-world data challenges and want to be part of a growing team building cutting-edge AI infrastructure.

    Key Responsibilities: Design, develop, and maintain data pipelines using tools like Airbyte and Prefect to feed AI and machine learning models. Integrate data from multiple structured and unstructured sources into unified and queryable layers using ElasticSearch or Vespa. Implement data validation, transformation, and storage solutions using modern ETL frameworks. Collaborate with AI, LLM, and data science teams to ensure reliable and optimized data flow for model training. Support database management, SQLModel, and data governance practices across services.

    Required Skills & Qualifications: Master's degree (or Bachelor's with equivalent experience) in Computer Science, Information Systems, or Data Engineering. Proficiency in Python and SQL; experience with PySpark or equivalent ETL frameworks. Hands-on experience with Airbyte, Prefect, and DBT. Familiarity with search and indexing systems like Vespa or ElasticSearch. Knowledge of cloud data platforms (AWS, GCP, or Azure) and API integration. Strong understanding of data security and applied AI workflows.
    $75k-100k yearly est. 3d ago
  • Lead Azure Databricks Engineer

    Syren

    Data engineer job in Atlanta, GA

    *** Individual Contractors (W2/1099) are encouraged to apply. Visa sponsorship is not available for this role at this time. ***

    An Azure Data Engineer is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

    Must-Have Skills: Experience migrating from other platforms to Databricks. Proficiency in Databricks and Azure Cloud, including Databricks Asset Bundles, and a holistic vision of data strategy. Proficiency in data streaming and data modeling. Experience architecting at least two large-scale big data projects. Strong understanding of data scaling and its complexities. Data archiving and purging mechanisms.

    Job Requirements
    • Degree in computer science or equivalent preferred
    • Demonstrable experience in architecture, design, implementation, and/or support of highly distributed applications with Azure cloud and Databricks
    • 10+ years of hands-on experience with data modeling, database design, data mining, and segmentation techniques
    • Working knowledge and experience with cloud architectures (e.g., SaaS, PaaS, IaaS) and the ability to address the unique security considerations of secure cloud computing
    • Should have architected solutions for cloud environments such as Microsoft Azure and/or GCP
    • Experience with debugging and performance tuning in distributed environments
    • Strong analytical skills with the ability to collect, organize, analyze, and broadcast significant amounts of information with attention to detail and accuracy
    • Experience dealing with structured and unstructured data
    • Must have Python and PySpark experience
    • Experience in ML and/or graph analysis is a plus
    $75k-100k yearly est. 2d ago
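
    For candidates weighing the Databricks requirement, below is a minimal PySpark sketch of the kind of batch transform such a role involves: read raw data, aggregate, and write a curated output. Paths, column names, and the revenue metric are hypothetical stand-ins, not from the posting; on Databricks the reads and writes would typically target Delta tables.

```python
# A minimal PySpark sketch of a Databricks-style batch transform.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("databricks-migration-sketch").getOrCreate()

# Columns assumed for illustration: customer_id, order_ts, amount.
orders = spark.read.json("/tmp/raw/orders")

# Example transform: daily revenue per customer.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").parquet("/tmp/curated/daily_revenue")
```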
  • Lead Data Engineer - Palantir Foundry

    Smurfit Westrock

    Data engineer job in Atlanta, GA

    Our technology organization is transforming how we work at WestRock. We align with our businesses to deliver innovative solutions that address specific business challenges, integrate processes, and create great experiences; connect our work to shared goals that propel WestRock forward in the Digital Age; and imagine how technology can advance the way we work by using disruptive technology. We are looking for forward-thinking technologists who can accelerate our focus areas, such as building stronger foundational technology capabilities, reducing complexity, employing digital transformation concepts, and leveraging disruptive technology.

    As a Lead Data Engineer, you will play a pivotal role in building and scaling modern data infrastructure that powers decision-making across production, supply chain, and operations. You will help define and analyze business requirements for enterprise-scale reports, evaluate business use cases for data engineering problems, and help design and develop processing solutions with cloud-based ETL technologies.

    How you will impact WestRock: Architect and implement scalable data pipelines using Palantir Foundry (pipelines, workshops, ontology) to unify and transform operational data. Design and develop robust data workflows using Python, Apache Airflow, and Apache Spark to support real-time and batch processing needs. Build and deploy solutions on cloud platforms (AWS or Azure), ensuring high availability, security, and performance. Collaborate with data scientists, analysts, and operations teams to deliver actionable insights and operational tooling. Define and enforce data engineering best practices, including CI/CD automation, version control (Git), and testing strategies. Mentor junior developers, conduct code reviews, and help shape the technical roadmap for the data platform.

    What you need to succeed: Bachelor's degree in computer science or similar. At least 6 years of strong data engineering experience. Hands-on experience with Palantir Foundry, including pipelines, ontology modeling, and workshop development. Strong programming skills in Python or Java, with experience building and maintaining production-grade data pipelines. Proficiency in Apache Airflow and Apache Spark for workflow orchestration and large-scale data processing. Proven experience deploying data solutions on AWS or Azure, with a strong understanding of cloud-native services. Familiarity with Git for version control and CI/CD pipelines for automated testing and deployment. Demonstrated ability to mentor junior engineers, lead projects, and work independently in a fast-paced environment. Good communication skills, with the ability to collaborate effectively across technical and non-technical teams. Good analytical and troubleshooting abilities.

    What we offer: A corporate culture based on integrity, respect, accountability, and excellence. Comprehensive training with numerous learning and development opportunities. An attractive salary reflecting skills, competencies, and potential. A career with a global packaging company where sustainability, safety, and inclusion are business drivers and foundational elements of the daily work.
    $75k-100k yearly est. 4d ago
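
    For the Airflow proficiency this listing asks for, here is a minimal Apache Airflow (2.4+) TaskFlow sketch of the daily batch orchestration pattern it describes. The task bodies, schedule, and DAG name are placeholder assumptions, not details from the posting.

```python
# A minimal Airflow TaskFlow sketch of a daily extract-transform-load DAG.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def operations_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling operational data (e.g., plant output).
        return [{"plant": "ATL-01", "units": 1200}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Example derived column; real logic would live in Spark/Foundry.
        return [{**r, "units_thousands": r["units"] / 1000} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")

    load(transform(extract()))

operations_pipeline()
```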
  • AWS Data Engineer (Only W2)

    Ampstek

    Data engineer job in Charlotte, NC

    Title: AWS Data Engineer. Experience: 10 years.

    Must Have Skills: Strong experience in AWS services, primarily serverless, databases, storage services, container services, schedulers, and batch services. Experience in Snowflake and Data Build Tool (DBT). Expertise in DBT, NodeJS, and Python. Expertise in Informatica, Power BI, databases, and Cognos.

    Detailed Job Description: Proven experience in leading teams across locations. Knowledge of DevOps processes, Infrastructure as Code, and their purpose. Good understanding of data warehouses, their purpose, and implementation. Good communication skills.

    Kindly share the resume at ******************
    $77k-103k yearly est. 2d ago
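
    To illustrate the serverless AWS emphasis above, here is a minimal AWS Lambda sketch: a handler that reacts to an S3 object-created event and logs the object so a downstream Snowflake/DBT job could pick it up. The handler's role in the pipeline is an assumption; the posting does not describe a specific architecture.

```python
# A minimal AWS Lambda handler for S3 object-created events.
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # S3 put events carry the bucket and key of each new object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)
        print(f"new object s3://{bucket}/{key} ({head['ContentLength']} bytes)")
    return {"statusCode": 200, "body": json.dumps("ok")}
```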
  • Palantir Data Engineer

    Keylent Inc.

    Data engineer job in Charlotte, NC

    Build and maintain data pipelines and workflows in Palantir Foundry. Design, train, and deploy ML models for classification, optimization, and forecasting use cases. Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries. Create dashboards and data applications using Slate or Streamlit to enable operational decision-making. Implement generative AI use cases using large language models (GPT-4, Claude, etc.).
    $77k-103k yearly est. 4d ago
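
    For the classification work this listing names, below is a minimal scikit-learn sketch of the train-and-evaluate loop. The synthetic data stands in for Foundry-sourced features, and the random-forest model choice is illustrative, not something the posting specifies.

```python
# A minimal scikit-learn classification sketch on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic features/labels stand in for real Foundry datasets.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```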
  • Data Engineer

Sharp Decisions

    Data engineer job in Charlotte, NC

    Resource Type: W2 only. Charlotte, NC - Hybrid. Mid-level (5-7 years).

    Role Description: A leading Japanese bank is in the process of driving a digital transformation across its Americas Division as it continues to modernize technology, strengthen its data-driven approach, and support future growth. As part of this initiative, the firm is seeking an experienced Data Engineer to support the design and development of a strategic enterprise data platform supporting Capital Markets and affiliated securities businesses. This role will contribute to the development of a scalable, cloud-based data platform leveraging Azure technologies, supporting multiple business units across North America and global teams.

    Role Objectives: Serve as a member of the Data Strategy team, supporting broker-dealer and swap-dealer entities across the Americas Division. Participate in the active development of the enterprise data platform, beginning with the establishment of reference data systems for securities and pricing data, and expanding into additional data domains. Collaborate closely with internal technology teams while adhering to established development standards and best practices. Support the implementation and expansion of the strategic data platform on the bank's Azure Cloud environment. Contribute technical expertise and solution design aligned with the overall Data Strategy roadmap.

    Qualifications and Skills: Proven experience as a Data Engineer, with strong hands-on experience in Azure cloud environments. Experience implementing solutions using Azure Cloud Services, Azure Data Factory, Azure Data Lake Gen2, Azure databases, Azure Data Fabric, API gateway management, and Azure Functions. Strong experience with Azure Databricks. Advanced SQL skills across relational and NoSQL databases. Experience developing APIs using Python (FastAPI or similar frameworks). Familiarity with DevOps and CI/CD pipelines (Git, Jenkins, etc.). Strong understanding of ETL/ELT processes. Experience within financial services, including exposure to financial instruments, asset classes, and market data, is a strong plus.
    $78k-101k yearly est. 23h ago
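
    Since this listing calls out API development with FastAPI, here is a minimal FastAPI sketch of a securities reference-data endpoint. The route, ISIN key, and in-memory store are hypothetical stand-ins for the bank's actual data platform.

```python
# A minimal FastAPI reference-data endpoint sketch.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="reference-data-sketch")

# Stand-in for a reference-data store (Azure SQL, Databricks, etc.).
SECURITIES = {"US0378331005": {"name": "Apple Inc.", "asset_class": "equity"}}

@app.get("/securities/{isin}")
def get_security(isin: str) -> dict:
    if isin not in SECURITIES:
        raise HTTPException(status_code=404, detail="unknown ISIN")
    return SECURITIES[isin]

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```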
  • Senior Data Engineer

Zillion Technologies, Inc.

    Data engineer job in McLean, VA

    The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience in Angular and DevOps is a nice-to-have.

    Must Have Qualifications: PySpark/Python-based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify.

    Responsibilities: Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project (new system: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest; current system: Informatica, SAS, AutoSys, DB2). Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Write automated tests using Behave/Cucumber, based on the new microservices-based architecture. Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, with experience in SonarQube and Fortify. Able to communicate and engage with business teams, analyze current business requirements (BRS documents), and create necessary data mappings. Preferred: strong skills and experience in reporting application development and data analysis. Knowledge of Agile methodologies and technical documentation.
    $77k-109k yearly est. 4d ago
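
    Given the Pytest emphasis above, here is a minimal pytest sketch of the unit-testing expectation. The transform under test is a hypothetical stand-in for a function inside one of the PySpark/Python microservices the listing describes.

```python
# A minimal pytest sketch: parametrized happy-path tests plus a failure case.
import pytest

def normalize_amount(raw: str) -> float:
    # Hypothetical transform: parse "1,234.50"-style strings into floats.
    return float(raw.replace(",", ""))

@pytest.mark.parametrize("raw,expected", [
    ("100", 100.0),
    ("1,234.50", 1234.5),
])
def test_normalize_amount(raw, expected):
    assert normalize_amount(raw) == expected

def test_normalize_amount_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_amount("not-a-number")
```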

Learn more about data engineer jobs

How much does a data engineer earn in Asheville, NC?

The average data engineer in Asheville, NC earns between $67,000 and $117,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Asheville, NC

$88,000