Data Engineer - ETL/ELT - Hybrid/Remote
Remote Hadoop developer job
Crown Equipment Corporation is a leading innovator in world-class forklift and material handling equipment and technology. As one of the world's largest lift truck manufacturers, we are committed to providing the customer with the safest, most efficient and ergonomic lift truck possible to lower their total cost of ownership.
Indefinite US Work Authorization Required.
Primary Responsibilities
Design, build and optimize scalable data pipelines and stores.
Clean, prepare and optimize data for consumption in applications and analytics platforms.
Participate in peer code reviews to uphold internal standards.
Ensure procedures are thoroughly tested before release.
Write unit tests and record test results.
Detect, define, and debug program issues whenever they arise.
Provide training to users and knowledge transfer to support personnel and other staff members as required.
Prepare system and programming documentation in accordance with internal standards.
Interface with users to extract functional needs and determine requirements.
Conduct detailed systems analysis to define scope and objectives and design solutions.
Work with Business Analyst to help develop and write system requirements.
Establish project plans and schedules and monitor progress providing status reports as required.
Qualifications
Bachelor's degree in Computer Science, Software/Computer Engineering, Information Systems, or related field is required.
4+ years' experience in SQL, ETL, ELT, and SAP data is required.
Python, Databricks, and Snowflake experience preferred.
Strong written, verbal, analytical and interpersonal skills are necessary.
Remote Work: Crown offers hybrid remote work for this position. A reasonable commute is necessary as some onsite work is required. Relocation assistance is available.
Work Authorization:
Crown will only employ those who are legally authorized to work in the United States. This is not a position for which sponsorship will be provided. Individuals with temporary visas, or who need sponsorship for work authorization now or in the future, are not eligible for hire.
No agency calls please.
Compensation and Benefits:
Crown offers an excellent wage and benefits package for full-time employees including Health/Dental/Vision/Prescription Drug Plan, Flexible Benefits Plan, 401K Retirement Savings Plan, Life and Disability Benefits, Paid Parental Leave, Paid Holidays, Paid Vacation, Tuition Reimbursement, and much more.
EOE Veterans/Disabilities
Senior Data Engineer
Hadoop developer job in Columbus, OH
Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in Columbus, OH (Remote). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-95277
Pay Range: $70 - $71 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
Work with Marketing data partners to build data pipelines that automate the data feeds from the partners into internal systems on Snowflake (see the sketch after this list).
Work with Data Analysts to understand their data needs and prepare datasets for analytics.
Work with Data Scientists to build the infrastructure to deploy models, monitor their performance, and audit results.
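For a concrete picture of the Snowflake feed automation described above, here is a minimal sketch assuming the snowflake-connector-python package; the account, credentials, and PARTNER_FEED table are hypothetical placeholders, and real credentials would come from a secrets manager:

import snowflake.connector

def load_partner_feed(local_file: str) -> None:
    # Connect to a hypothetical warehouse/database/schema.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="...",  # placeholder; use a secrets manager in practice
        warehouse="ETL_WH",
        database="MARKETING",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Stage the partner file on the table's internal stage, then bulk-load it.
        cur.execute(f"PUT file://{local_file} @%PARTNER_FEED")
        cur.execute("COPY INTO PARTNER_FEED FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    finally:
        conn.close()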
Key Requirements and Technology Experience:
Key skills: Snowflake, Python, and AWS.
Experience with building data pipelines, data pipeline infrastructure, and related tools and environments used in analytics and data science (e.g., Python, Unix).
Experience developing analytic workloads with AWS services such as S3, Simple Queue Service (SQS), Simple Notification Service (SNS), Lambda, EC2, ECR, and Secrets Manager (see the sketch after this list).
Strong proficiency in Python, SQL, Linux/Unix shell scripting, GitHub Actions or Docker, Terraform or CloudFormation, and Snowflake.
Order of importance: Terraform, Docker, GitHub Actions or Jenkins.
Experience with orchestration tools such as Prefect, dbt, or Airflow.
Experience automating data ingestion, processing, and reporting/monitoring.
Experience with other relevant tools used in data engineering (e.g., SQL, Git).
Ability to set up environments (Dev, QA, and Prod) using GitHub repositories and GitHub rules/methodologies, and to maintain them via SQL coding and proper versioning.
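As a simplified illustration of the S3/SQS pattern named in these requirements (not part of the client's actual stack), the following sketch long-polls a queue for S3 event notifications with boto3; the queue URL and message layout follow the standard S3 notification format, and all names are hypothetical:

import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/feed-events"  # hypothetical

def poll_once() -> None:
    # Long-poll the queue for S3 event notifications.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            obj = s3.get_object(Bucket=bucket, Key=key)
            print(f"fetched {key}: {obj['ContentLength']} bytes")
        # Delete the message only after successful processing.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])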
Our client is a leader in the insurance industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Senior Data Engineer (W2 only)
Hadoop developer job in Columbus, OH
Bachelor's Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java.
Proficiency with Azure data services, such as Azure Data Lake, Azure Data Factory and Databricks.
Expertise using cloud security (e.g., Active Directory, network security groups, and encryption services).
Proficient in Python for developing and maintaining data solutions.
Experience with optimizing or managing technology costs.
Ability to build and maintain a data architecture supporting both real-time and batch processing.
Ability to implement industry standard programming techniques by mastering advanced fundamental concepts, practices, and procedures, and having the ability to analyze and solve problems in existing systems.
Expertise with unit testing, integration testing and performance/stress testing.
Database management skills and understanding of legacy and contemporary data modeling and system architecture.
Demonstrated leadership skills, team spirit, and the ability to work cooperatively and creatively across an organization.
Experience on teams leveraging Lean or Agile frameworks.
Data Engineer
Remote Hadoop developer job
We are looking for a Data Engineer in Austin, TX (fully remote - MUST work CST hours).
Job Title: Data Engineer
Contract: 12 Months
Hourly Rate: $75-$82 per hour (W2 only)
Additional Notes:
Fully remote - MUST work CST hours
Core skills: SQL, Python, dbt. Utilize geospatial data tools (PostGIS, ArcGIS/ArcPy, QGIS, GeoPandas, etc.) to optimize and normalize spatial data storage, and run spatial queries and processes to power analysis and data products (see the sketch after this list).
Design, create, refine, and maintain data processes and pipelines used for modeling, analysis, and reporting using SQL (ideally Snowflake and PostgreSQL), Python, and pipeline and transformation tools like Airflow and dbt.
Conduct detailed data research on internal and external geospatial data (POI, geocoding, map layers, geometric shapes), identify changes over time, and maintain geospatial data (shapefiles, polygons, and metadata).
Operationalize data products with detailed documentation, automated data quality checks, and change alerts.
Support data access through various sharing platforms, including dashboard tools.
Troubleshoot failures in data processes, pipelines, and products.
Communicate with and educate consumers on data access and usage, managing transparency in metric and logic definitions.
Collaborate with other data scientists, analysts, and engineers to build full-service data solutions.
Work with cross-functional business partners and vendors to acquire and transform raw data sources.
Provide frequent updates to the team on progress and status of planned work.
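The sketch referenced in the notes above: a minimal GeoPandas example that normalizes two spatial layers to a common CRS and runs a spatial join. The file paths and the region_name column are hypothetical:

import geopandas as gpd

# Load two hypothetical layers and project them to a shared CRS.
pois = gpd.read_file("pois.shp").to_crs(epsg=4326)
regions = gpd.read_file("regions.shp").to_crs(epsg=4326)

# Attach each POI to the region polygon that contains it.
joined = gpd.sjoin(pois, regions, how="left", predicate="within")

# A simple downstream product: POI counts per region.
print(joined.groupby("region_name").size().head())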
About us:
Harvey Nash is a national, full-service talent management firm specializing in technology positions. Our company was founded with a mission to serve as the talent partner of choice for the information technology industry.
Our company vision has led us to incredible growth and success in a relatively short period of time and continues to guide us today. We are committed to operating with the highest possible standards of honesty, integrity, and a passionate commitment to our clients, consultants, and employees.
We are part of Nash Squared Group, a global professional services organization with over forty offices worldwide.
For more information, please visit us at ******************************
Harvey Nash provides benefits; please review: 2025 Benefits -- Corporate
Regards,
Dinesh Soma
Recruiting Lead
Senior Data Analytics Engineer
Hadoop developer job in Columbus, OH
We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.
You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services.
Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
Implement best practices for data quality, validation, auditing, and error handling within pipelines (see the sketch after this list).
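A condensed sketch of the ingestion-and-curation flow these responsibilities describe, in PySpark; the S3 paths and column names are hypothetical:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Land raw source data from a hypothetical bucket.
raw = spark.read.json("s3://raw-bucket/orders/")

# Basic validation and curation before exposing the data to analytics tools.
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_ts").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Partitioned Parquet keeps downstream Athena/Redshift queries efficient.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"
)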
Analytics Solution Design
Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.
Cross-Functional Collaboration
Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
Participate in daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with the enterprise data strategy and coding standards.
Provide mentorship and guidance to junior engineers and analysts as needed.
Engineering (Supporting Skills)
Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application-layer needs.
Implement scripts, utilities, and micro-services as needed to support analytics workloads.
Required Qualifications
5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
Expert-level proficiency (10/10) in:
Python
PySpark
Strong working knowledge of:
SQL and other programming languages
Demonstrated experience designing and delivering big-data ingestion and transformation solutions through AWS.
Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
Strong problem-solving skills and the ability to work independently in a fast-paced environment.
Preferred Qualifications
Experience with BI/Visualization tools such as Tableau
Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
Familiarity with machine learning workflows or MLOps frameworks.
Knowledge of metadata management, data governance, and data lineage tools.
Data Engineer
Hadoop developer job in Columbus, OH
We're seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions.
Key Responsibilities
Design, implement, and optimize data pipelines and workflows within Databricks (see the sketch after this list).
Develop and maintain data models and SQL queries for efficient ETL processes.
Partner with cross-functional teams to define data requirements and deliver business-ready solutions.
Use version control systems to manage code and ensure collaborative development practices.
Validate and maintain data quality, accuracy, and integrity through testing and monitoring.
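The sketch referenced above: one idempotent Databricks pipeline step that upserts an incoming batch into a Delta table, so reruns do not duplicate rows. Table and column names are hypothetical, and spark is the ambient session inside a Databricks job or notebook:

from delta.tables import DeltaTable

# Incoming batch from a hypothetical landing zone.
updates = spark.read.parquet("/mnt/landing/customers/")

# Merge (upsert) into the target Delta table on the business key.
target = DeltaTable.forName(spark, "analytics.customers")
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())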
Required Skills
Proficiency in Python for data engineering and automation.
Strong, practical experience with Databricks and distributed data processing.
Advanced SQL skills for data manipulation and analysis.
Experience with Git or similar version control tools.
Strong analytical mindset and attention to detail.
Preferred Qualifications
Experience with cloud platforms (AWS, Azure, or GCP).
Familiarity with enterprise data lake architectures and best practices.
Excellent communication skills and the ability to work independently or in team environments.
Data Engineer
Hadoop developer job in Dublin, OH
The Data Engineer is a technical leader and hands-on developer responsible for designing, building, and optimizing data pipelines and infrastructure to support analytics and reporting. This role will serve as the lead developer on strategic data initiatives, ensuring scalable, high-performance solutions are delivered effectively and efficiently.
The ideal candidate is self-directed, thrives in a fast-paced project environment, is comfortable making technical decisions and architectural recommendations, and has prior experience with modern data platforms, most notably Databricks and the “lakehouse” architecture. They will work closely with cross-functional teams, including business stakeholders, data analysts, and engineering teams, to develop data solutions that align with enterprise strategies and business goals.
Experience in the financial industry is a plus, particularly in designing secure and compliant data solutions.
Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
Optimize data storage, retrieval, and processing for performance, security, and cost-efficiency.
Ensure data integrity and governance by implementing robust validation, monitoring, and compliance processes.
Consume and analyze data from the data pipeline to infer, predict, and recommend actionable insights that inform operational and strategic decision-making and produce better results.
Empower departments and internal consumers with metrics and business intelligence to operate and direct our business, better serving our end customers.
Determine technical and behavioral requirements, identify strategies as solutions, and select solutions based on resource constraints.
Work with the business, process owners, and IT team members to design data and advanced analytics solutions.
Perform data modeling and prepare data in databases for analysis and reporting through various analytics tools.
Play a technical specialist role in championing data as a corporate asset.
Provide technical expertise in collaborating with project and other IT teams, internal and external to the company.
Contribute to and maintain system data standards.
Research and recommend innovative and, where possible, automated approaches for system data administration tasks. Identify approaches that leverage our resources and provide economies of scale.
Engineer systems that balance and meet performance, scalability, recoverability (including backup design), maintainability, security, and high-availability requirements and objectives.
Skills:
Databricks and related technologies - SQL, Python, PySpark, Delta Live Tables, data pipelines, AWS S3 object storage, Parquet/columnar file formats, AWS Glue (see the Delta Live Tables sketch after this list).
Systems Analysis - The application of systems analysis techniques and procedures, including consulting with users, to determine hardware, software, platform, or system functional specifications.
Time Management - Managing one's own time and the time of others.
Active Listening - Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
Critical Thinking - Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
Active Learning - Understanding the implications of new information for both current and future problem-solving and decision-making.
Writing - Communicating effectively in writing as appropriate for the needs of the audience.
Speaking - Talking to others to convey information effectively.
Instructing - Teaching others how to do something.
Service Orientation - Actively looking for ways to help people.
Complex Problem Solving - Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
Troubleshooting - Determining causes of operating errors and deciding what to do about them.
Judgment and Decision Making - Considering the relative costs and benefits of potential actions to choose the most appropriate one.
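The Delta Live Tables sketch referenced in the skills list: a bronze ingest table feeding a silver table with an enforced data-quality expectation. The S3 path and column names are hypothetical, and both spark and dlt are provided by the Databricks DLT runtime:

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from S3")
def bronze_events():
    return spark.read.json("s3://bucket/raw/events/")  # hypothetical path

@dlt.table(comment="Cleaned events for analytics")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # rows failing this rule are dropped
def silver_events():
    return dlt.read("bronze_events").withColumn("event_date", F.to_date("event_ts"))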
Experience and Education:
High School Diploma (or GED or High School Equivalence Certificate).
Associate degree or equivalent training and certification.
5+ years of experience in data engineering including SQL, data warehousing, cloud-based data platforms.
Databricks experience.
2+ years Project Lead or Supervisory experience preferred.
Must be legally authorized to work in the United States. We are unable to sponsor or take over sponsorship at this time.
Data Engineer (Databricks)
Hadoop developer job in Columbus, OH
ComResource is searching for a highly skilled Data Engineer with a background in SQL and Databricks who can handle the design and construction of scalable data management systems, ensure that all data systems meet company requirements, and research new uses for data acquisition.
Responsibilities:
Design, construct, install, test and maintain data management systems.
Build high-performance algorithms, predictive models, and prototypes.
Ensure that all systems meet the business/company requirements as well as industry practices.
Integrate up-and-coming data management and software engineering technologies into existing data structures.
Develop set processes for data mining, data modeling, and data production.
Create custom software components and analytics applications.
Research new uses for existing data.
Employ an array of programming languages and tools to connect systems together.
Recommend different ways to constantly improve data reliability and quality.
Qualifications:
5+ years of data quality engineering experience
Experience with Cloud-based systems, preferably Azure
Databricks and SQL Server testing
Experience with ML tools and LLMs
Test automation frameworks
Python and SQL for data quality checks (see the sketch after this list)
Data profiling and anomaly detection
Documentation and quality metrics
Healthcare data validation experience preferred
Test automation and quality process development
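The sketch referenced above: a small pandas illustration of automated data-quality checks on a healthcare-style extract. The file name and columns (claim_id, paid_amount) are hypothetical:

import pandas as pd

def quality_report(df: pd.DataFrame) -> list:
    # Collect human-readable issues instead of failing on the first one.
    issues = []
    if df["claim_id"].isna().any():
        issues.append("null claim_id values")
    if df["claim_id"].duplicated().any():
        issues.append("duplicate claim_id values")
    if (df["paid_amount"] < 0).any():
        issues.append("negative paid_amount values")
    return issues

df = pd.read_csv("claims_extract.csv")  # hypothetical extract
problems = quality_report(df)
print("PASS" if not problems else f"FAIL: {problems}")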
Plus:
Azure Databricks
Azure Cognitive Services integration
Databricks foundation model integration
Claude API implementation a plus
Python and NLP frameworks (spaCy, Hugging Face, NLTK)
Senior SAP Developer - ETL / REMOTE
Remote Hadoop developer job
Robinson Group has been retained to fill a newly created role on a newly created team: a Senior SAP Developer (ETL), fully remote.
A technically strong team that is using innovative approaches, the latest technology, and strong collaboration.
*This fully remote position will be part of a $17B organization but has the flexibility and mindset of a start-up organization.
*A growing, smart, and fully supported team that will have you leading the integration of SAP data, primarily from SAP ECC and SAP S/4HANA, into a unified, cloud-based Enterprise Data Platform (EDP).
This role needs deep expertise in SAP data structures, combined with strong experience in enterprise ETL development using cloud-native technologies.
As a Senior SAP Developer (ETL), you will play a key role in designing and implementing scalable data pipelines that extract, transform, and harmonize data from SAP systems into canonical models for analytics, reporting, and machine learning use cases.
You will partner closely with data engineers, architects, and SAP subject matter experts to ensure accuracy, performance, and alignment with business requirements.
This role will support a variety of high-impact projects focused on enabling cross-ERP visibility, operational efficiency, and data-driven decision-making across finance, manufacturing, and supply chain functions.
Your contributions will help standardize critical datasets and accelerate the delivery of insights across the organization.
Your skillset:
Strong experience in SAP ECC and SAP HANA
SAP Datasphere (building ETL pipelines)
Architect and implement ETL pipelines to extract data from SAP ECC / HANA / Datasphere
Design and build robust, scalable ETL/ELT pipelines to ingest data into the Microsoft cloud using tools such as Azure Data Factory or Alteryx.
Analyze/interpret SAP's internal data models while also working closely with both SAP functional and technical teams.
Lead the end-to-end data integration process for SAP ECC.
Leverage knowledge of HANA DW to support reporting and semantic modeling
Strong communication capabilities as they relate to interfacing with supply chain and finance business leaders.
Strong cloud knowledge (Azure preferred; also GCP, AWS, Fabric).
Data modeling skills.
Exposure to/experience with Python (building data transformations in SQL and Python).
Your background:
Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
10 years of IT experience, with 8 years of SAP experience (SAP ECC and SAP S/4HANA).
Hands-on experience with Azure cloud data services including Synapse Analytics, Data Lake Storage, SQL DB.
Experience building cloud-native applications, for example with Microsoft Azure, AWS or GCP
SharePoint Solution Developer
Hadoop developer job in Columbus, OH
Job Title: Sr. SharePoint Solution Developer
Client: Aerospace domain
Visa: USC, GC only
Exp level: 13+ years
Pay rate: $80/hr on C2C (depends on experience)
No. of Openings: 2
Top Skills:
- SharePoint 2019
- .NET
Primary Duties and Responsibilities
Migrate SharePoint Server-side solutions from SharePoint 2007 to SharePoint 2016
Troubleshoot and fix SharePoint OOB and custom application issues; provide root cause analysis in a timely manner
Create and maintain SharePoint sites; work with content including site and site collection features, lists, libraries, permissions, and other SharePoint components.
Execute product specification, system design, development, and system integration
Participate in product and program collaboration
Refactor SharePoint server-side applications and services to latest SharePoint platforms
Maintain, configure, and improve SharePoint solutions and artifacts post migration
Complete other tasks as required
Experience, Education and Skills
5+ years of SharePoint server-side solution development experience using SharePoint 2007 through SharePoint 2016
8+ years in any software development role
Extensive knowledge of C#, .Net framework and ASP.Net
Extensive knowledge of Microsoft Internet Information Services (IIS)
Extensive knowledge of Site templates, SharePoint custom and OOB master pages and page layouts
Extensive knowledge of SharePoint server artifacts and services
Extensive knowledge of Microsoft SQL Server including SQL queries and other SQL Components, Performance troubleshooting and fixing performance issues
Strong knowledge of InfoPath forms development with code-behind, and of forms migration
Strong knowledge of various authentication methods and Kerberos
Experience using third-party migration tools such as Sharegate is a plus
Strong knowledge of object-oriented programming
Strong web development skills: HTML5, CSS3, and JavaScript libraries
Strong knowledge of web service models: SOAP, OData, REST
Experience in Client-side debugging, ULS log analysis and Network trace analysis
Experience developing client-side solutions using SharePoint Framework is a plus
Experience with TFS and Git
General Requirements
Exhibit and practice courteous, ethical and professional behavior while interacting with both internal and external customers
Act in a collaborative, team-oriented environment focused on common goals to achieve mutually beneficial results
Be accountable and responsible for the accuracy and completeness of assigned work and results
Prioritize and manage workload and communicate issues clearly
Exhibit effective verbal and written communication skills
Comply with all laws, regulations and company policies
ETL Informatica IICS Developer - 12-Month Contract - Remote Opportunity - Direct Customer
Remote Hadoop developer job
Greetings from Accion Labs,
Our direct client is looking for an ETL Informatica IICS Developer for a 12-month, remote contract with a direct customer.
Primary skills: Data Engineering & ETL/ELT, ODI or Informatica Cloud (IICS), SQL/PL-SQL, Informatica IICS
Job Description:
The ETL engineer will install, test, and maintain ETL jobs and processes.
• 5 years' experience in IICS development and support
• Troubleshoot and resolve production issues and provide high-level support on system software
• Part of the production support team spanning multiple time zones and geographies
• Coordinate with internal IT teams to analyze and resolve production process failures
• Prepare and execute processes to correct data discrepancies in reporting tables
• Provide 24x7 on-call support on a rotation basis
• Ensure all service level objectives are achieved or exceeded
• Join conference calls with other IT departments to support recovery from outages
• Perform release management and post-implementation tasks for software releases to production environments
• Respond to business user requests regarding data issues and outages
• Provide feedback to Application Development teams regarding opportunities to make code more reliable, faster, and easier to maintain
• Provide technical analysis to help debug issues, perform root cause analysis, and eliminate repeated incidents
• Collaborate with team members to resolve complex issues to assure the successful delivery of IT solutions
• Automate manual, repeatable tasks
• Develop and maintain documentation, technical procedures, and user guides
Education:
Bachelor's degree in Computer Science, Information Systems, or a related discipline.
This role is open to W2 candidates or those seeking Corp-to-Corp employment.
The salary range for this role is $90-100k per annum; for Corp-to-Corp rates, please contact the recruiter.
In addition to other benefits, Accion Labs offers a comprehensive benefits package, with Accion covering 65% of medical, dental, and vision premiums for employees, their spouses, and dependent children enrolled in the Accion-provided plans.
Android Developer
Hadoop developer job in Columbus, OH
Hello,
My name is Pradeep Bhondve, and I work as a Technical Recruiter for K-Tek Resourcing.
We are searching for professionals for the below business requirements for one of our clients. Please read through the requirements and connect with us in case it suits your profile.
Please see the job description below and, if you feel interested, send me your updated resume at ********************************** or give me a call at *************.
LinkedIn Profile: ******************************************************
Job Title: Android Developer
Location: Columbus, OH - relocation accepted; will work from the EST and CST time zones
Duration: Long Term
Job Description -
Skill Set: Kotlin, Android SDK, Jetpack Compose, Dagger, Coroutines, Junit, MVVM
Entry-Level Android Developer- Onsite (35807)
Hadoop developer job in Columbus, OH
An international software development company in the Columbus, Ohio area is now looking for an Entry-Level Android Developer to join their office. This is a direct-hire, full-time, in-person role.
Entry-Level Android Developer Responsibilities Include:
Design and develop applications for the Android platform
Evaluate and analyze the systems to determine weaknesses and issues, potential improvements, application updates, etc.
Provide training and assistance to other company personnel on system functions
Provide support for technical issues at the end-user level, and document actions taken
Prepare system function documentation and instruction materials
Plan and implement system updates, as well as department projects and upgrades
Other duties as assigned
Entry-Level Android Developer Qualifications Include:
Bachelor's degree in Computer Science, Computer Engineering, or related field
Minimum 1-3 years of experience in Android application development; In-Vehicle Infotainment (IVI) systems experience is a plus
Strong communication skills and customer service / client relations ability
Flexibility and willingness to collaborate in a multicultural environment
Organizational and time management skills
This position is in an office work environment. While performing the duties of this job you may be required to intermittently sit, stand, walk, lift up to 25 pounds, lift in excess of 25 pounds with a lift assist, climb stairs, use hands to handle or feel parts/equipment, reach with hands and arms, stoop, kneel, crouch, bend at the waist, talk and hear. You may also be required to use close vision, distance vision, color vision, depth perception, and peripheral vision.
===============================================================
Activ8 Recruitment & Solutions / Renaissance Resources Inc. has been a trusted leader in North American recruiting for Japanese businesses for over 25 years. We specialize in connecting top talent with companies in the Automotive, Electronics, Food & Beverage, Logistics, Manufacturing, Oil & Gas, Banking & Finance, and Entertainment industries. Our client-focused approach ensures that we understand your unique needs, whether you're a company seeking skilled professionals or a candidate looking for the right career opportunity. By working closely with each individual, we provide tailored solutions that drive success.
We screen ALL Candidates to verify the validity of each applicant's provided information. Upon submitting your resume, we will contact only those candidates that we deem qualified for our client. If we do not contact you, we do not see the fit for the position. If we are unable to reach you in a reasonable timeframe, you will be eliminated from the pool of potential candidates.
We prioritize direct applicants; third-party resumes may not be reviewed.
Senior HL7 Interface Developer (Remote)
Remote Hadoop developer job
A large health services network is actively seeking a new HL7 Interface Developer to join their staff in a Senior-level Remote opportunity.
About the Opportunity:
Schedule: Monday to Friday
Hours: 8am to 5pm (EST)
Responsibilities:
Design, develop, test, and deploy HL7 interfaces using integration engines (e.g., Cloverleaf, InterSystems Ensemble/IRIS).
Translate functional requirements into technical specifications for interface development.
Build and maintain real-time, batch, and API-based integrations adhering to HL7 standards (v2.x, v3, FHIR).
Develop robust workflows for ADT, ORM, ORU, SIU, MDM, DFT, and other standard HL7 message types (see the parsing sketch after this list).
Perform other duties as needed.
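The parsing sketch referenced above: a tiny example of reading fields from an ADT message with the python-hl7 package (an assumption for illustration; engines such as Cloverleaf or IRIS handle this natively). The patient values are synthetic:

import hl7

# python-hl7 expects carriage returns between segments.
message = "\r".join([
    "MSH|^~\\&|SEND_APP|FAC|RECV_APP|FAC|202401150830||ADT^A01|MSG00001|P|2.5",
    "PID|1||12345^^^HOSP^MR||DOE^JANE||19800101|F",
])

h = hl7.parse(message)
pid = h.segment("PID")
print(str(pid[5]), str(pid[7]))  # patient name (PID-5), date of birth (PID-7)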
Qualifications:
2+ years of experience developing HL7 interfaces within a healthcare environment
Must be Epic Bridges Certified
Experience developing interfaces in the InterSystems IRIS integration engine (formerly known as Ensemble)
Experience in data conversion, converting historical clinical data into Epic
Ability to build interfaces between Epic and third-party applications
Mobile Application Developer - iOS
Hadoop developer job in Columbus, OH
We are looking for a Mobile Application Developer to work for our client. The ideal candidate aligns with the responsibilities and qualifications outlined below.
Responsibilities:
Design, develop, and maintain iOS applications using Swift
Work across the entire app lifecycle: design, build, test, deploy, release, and post-launch support
Architect and build high-performance, scalable iOS applications
Collaborate with cross-functional teams to deliver seamless user experiences
Ensure code quality through testing and best practices
Qualifications:
4+ years of experience developing iOS applications using Swift
Proven experience across all phases of the app lifecycle
Strong background in architecting and building high-performance iOS applications
Solid understanding of Apple's design principles and interface guidelines
Excellent problem-solving and communication skills
What Our Client Offers:
A creative environment focused on innovation and user experience
Opportunities to work on cutting-edge mobile projects impacting thousands of users
Access to modern tools and technologies for mobile development
Competitive compensation and comprehensive benefits
Application Developer
Hadoop developer job in Newark, OH
Manifest Solutions is currently seeking an Application Developer for a hybrid position in Newark, OH.
- MUST be within commuting distance of Newark, OH.
NO C2C; NO 3rd Parties - Applicants ONLY
Decommission and move apps to ServiceNow.
Rewrite apps in .NET.
Designs, codes, tests, documents, releases, and supports custom software applications to meet specific technology needs.
Gather and analyze functional business/user requirements.
Define the technical requirements for the custom software by analyzing systems and processes, including their dependencies and interactions, in the context of the business's technology need(s).
Prepare Scope of Work documents that describe in detail the components that will be developed and the methods that will be used, including written requirements and time estimates; entity relationship, user flow, and data flow diagrams; permissions and roles matrices; and other applicable design artifacts.
Create prototypes that enable business users to verify that functionality will meet the specific business technology need(s).
Develop software solution(s) in accordance with the business and technical requirements by writing and implementing source code.
Test and debug source code.
Develop and maintain technical documentation that represents the current state design and code of custom software applications.
Develop and maintain user guides that describe the features and functionality of custom software applications.
Periodically evaluate existing custom software applications to assess code quality and functional integrity over time.
Update existing custom software applications as necessary to fix errors, adapt to new hardware, improve performance, refactor code, enhance interfaces and/or implement new features/functionality.
Qualifications
2-4 yrs - Applied specialized experience with the following: Angular 4+, .NET, C# 7+, SQL, VS Code/Visual Studio, GitLab, Postman, API Design and Development
Formal education in software development, web development, or similar focus or equivalent professional experience
Familiar with and able to adhere to a formal SDLC program
Understands and can implement Secure Coding Practices (e.g. OWASP Top 10)
Experience doing containerized development
Cloud Development experience (Azure, AWS)
Familiarity with the following types of testing: User Testing, Integration Testing, Isolation Testing, Load Testing, End to End Testing, Vulnerability Testing
Database Developer - Remote
Remote Hadoop developer job
Database developer to support front-end systems (as needed by developers across the organization, in support of web services, third-party, or internal development needs), to the exclusion of reporting needs by other departments. Developed code includes but is not limited to PL/SQL in the form of triggers, procedures, functions, and materialized views. Generates custom applications for intra-department use by business users in a rapid application development platform (primarily APEX). Responsible for functional testing and deployment of code through the development life cycle. Works with end users to obtain business requirements. Responsible for developing, testing, improving, and maintaining new and existing processes to help users retrieve data effectively. Collaborates with administrators and business users to provide technical support and identify new requirements.
Responsibilities:
Design stable, reliable and effective database processes.
Solve database usage issues and malfunctions.
Gather user requirements and identify new features.
Provide data management support to users.
Ensure all database programs meet company and performance requirements.
Research and suggest new database products, services, and protocols.
Requirements and skills
In-depth understanding of data management (e.g. permissions, security, and monitoring)
Excellent analytical and organization skills
An ability to understand front-end user requirements and a problem-solving attitude
Excellent verbal and written communication skills
Assumes responsibility for related duties as required or assigned.
Stays informed regarding current computer technologies and relational database management systems with related business trends and developments.
Consults with respective IT management in analyzing business functions and management needs and seeks new and more effective solutions. Seeks out new systems and software that reduces processing time and/or provides better information availability and decision-making capability.
Job Type: Full-time
Pay: From $115,000 to $128,000 yearly
Expected hours: 40 per week
Benefits:
Dental insurance
Health insurance
Paid time off
Vision insurance
Various health insurance options & wellness plans
Required Knowledge
Considerable knowledge of the design of online computer applications.
Required Experience
One to three years of database development/administration experience.
Skills/Abilities
Strong creative and analytical thinking skills.
Well organized with strong project management skills.
Good interpersonal and supervisory abilities.
Ability to train and provide aid to others.
Database Developer
Remote Hadoop developer job
Healthcare Management Solutions, LLC (HMS) has an opening for a Database Developer to work remotely. The Database Developer must have robust knowledge of Information Technology (IT) and database development processes to identify application requirements and perform data mapping, as well as deep knowledge of ASPEN datasets. The individual must be able to work closely with the customer to identify business needs and understand usage scenarios for implementation within applications.
Responsibilities:
Provide Subject Matter Expertise as it relates to mapping legacy ASPEN data to the new iQIES platform
Lead and/or assist in ongoing reviews of business/data processes and customer usage scenarios
Perform data migration analysis and prioritize requirements based upon business needs
Conduct meetings and presentations to share findings and/or possible solutions
Perform data mapping and data migration
Participate in root cause analysis to recommend enhancements or other appropriate actions to improve usability and/or functionality
Communicate effectively and closely with clients, technical team members, and management
Database Developer 1 (Remote)
Remote Hadoop developer job
Prepares, defines, structures, develops, implements, and maintains database objects. Analyzes query performance, identifies bottlenecks, and implements optimization techniques. Defines and implements interfaces to ensure that various applications and user-installed or vendor-developed systems interact with the required database systems.
Creates database structures, writes and tests SQL queries, and optimizes database performance.
Plans and develops test data to validate new or modified database applications.
Works with business analysts and other stakeholders to understand requirements and integrate database solutions.
Builds and implements database systems that meet specific business requirements, ensuring data integrity and security, as well as troubleshooting and resolving database issues.
Designs and implements ETL pipelines to integrate data from various sources using SSIS.
Responsible for various SQL jobs.
Skills Required
Strong understanding of SQL and DBMS like MySQL, PostgreSQL, or Oracle.
Ability to design and model relational databases effectively.
Skills in writing and optimizing SQL queries for performance.
Ability to troubleshoot and resolve database-related issues.
Ability to communicate technical information clearly and concisely to both technical and non-technical audiences.
Ability to collaborate effectively with other developers and stakeholders.
Strong ETL experience specifically with SSIS.
Skills Preferred
Azure experience is a plus
.Net experience is a plus
GitHub experience is a plus
Experience Required
2 years of progressively responsible programming experience or an equivalent combination of training and experience.
Education Required
Bachelor's degree in Information Technology or Computer Science, or equivalent experience
Senior Auto Technician - ASE CERTIFIED - Westerville - Schrock Rd
Hadoop developer job in Westerville, OH
Boyds Tire & Service has been providing drivers in Central Ohio with the best automotive products and services since 1996. We strive to provide you with high-quality tires and reliable car repairs at our several locations in Columbus, Blacklick, Hilliard, Lewis Center, Marysville, and Central Ohio. Our staff is ready to go above and beyond to help you meet your needs, to get you back on the road, satisfied.
The Automotive Technician is responsible for effectively and efficiently diagnosing and repairing customer vehicles while adhering to the MAP guidelines and in accordance with dealership, manufacturer, and Boyds Tire standards.
COMPENSATION: Pay ranges from $25 to $40 per hour depending on experience (hourly plus flag rate).
Principal Duties and Responsibilities:
Diagnoses vehicles according to the appropriate level of certifications/experience.
Performs work as outlined on the Multi-point Inspection and/or Repair Order with efficiency and accuracy.
Explains technical diagnosis and needed repairs to non-mechanical individuals which may include the Store Manager, Service Consultants and/or customers.
Recommends services that are necessary to keep the customer's vehicle in running condition; properly documents all recommendations in the customer's file.
Follows all safety procedures and reports any concerns to the Shop Foreman or Store Manager.
Maintains appropriate ASE certifications and renewals of expiring certifications.
Automotive Technician Benefits:
Competitive Bi-Weekly Pay
Tuition Reimbursement
Paid Vacation and Sick Time
6 Paid Holidays
Medical, Dental and Vision Insurance
Life Insurance (Company paid)
401(k) Retirement Savings Plan with Company Match
Discounted Services on Personal and Immediate Family Vehicles
Opportunity for Advancement!
Qualifications:
Prefer a minimum of one unexpired ASE certification or equivalent experience or training (3+ years of senior-level experience).
Possess a valid driver's license
Must be at least 18 years of age
Ability to work a minimum of five days, including Saturdays.
Sun Auto Tire & Service provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
Job Industries: Automotive