
Requirements Engineer jobs at Cipher

- 2493 jobs
  • Android Mobile Reverse Engineer: All Levels (Requires Applicants Currently Hold TS Level Clearance)

    Cipher Tech Solutions 4.3 company rating


    Reverse engineer Android malware, defeat obfuscation techniques, and collect evidence of malicious behavior. Provide support in the form of scripting to enable better application triage, plus value-added reverse engineering expertise. Integrate existing scripts into our analysis pipelines and develop new scripts in response to customer specifications. Reverse Engineers will work primarily with decompiled Java but will occasionally encounter Java/Smali bytecode and native libraries that require familiarity with ARM assembly language. Regardless of whether you are applying to a software developer or reverse engineer position, if your application is approved, we will test you on both skills as part of our hiring process. At this time, the majority of our open positions require aptitude in both domains.

    Core Position Requirements:
    - US Citizenship required (no exceptions)
    - Applicants MUST hold a TS (Top Secret) Level US Government Security Clearance at time of application
    - Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, Cyber Security, or Math (a minor in one of the above degree types is also acceptable if the degree major is a hard STEM discipline)

    Technical Requirements:
    - Must possess strong software development skills
    - Must be able to read and/or write x86, x64, and/or ARM assembly code
    - Able to create and debug scripts that enable reverse engineering
    - Familiar with maintainable software processes, including software documentation, source code versioning, and unit testing
    - Able to convey technical topics verbally, in source code comments, and in documentation
    - Ability to work independently and as part of a team in a fast-paced environment with ever-changing requirements
    - Possess one of the following: experience developing in Java and an interest in learning about Java bytecode and ARM assembly, or experience reverse engineering binaries with Ghidra or IDA Pro

    Desirable:
    - Prior Android development experience
    - Experience reverse engineering mobile applications
    - Familiarity with APK decompilation tools such as JEB, JADX, or Dex2Jar
    - Experience reverse engineering native ARM binaries with IDA Pro or Ghidra
    - Familiarity with ARM assembly language
    - Familiarity with hex editors and working at the byte level
    - Platform-specific mobile development experience using Kotlin

    Key Qualities: Self-motivated, detail-oriented, strong work ethic; able to quickly adapt to new situations and problems; able to operate in mission-critical and time-sensitive environments; able to work in a team environment.

    Benefits: Mission-focused work environment; subsidized medical/dental/vision insurance; HSA contributions; 401k matching; holiday and Paid Time Off (PTO); monthly WAN parties & quarterly events; flexible hours. Remote work is not an option. We do offer relocation assistance; please see our careers page or inquire with your recruiter about the terms and conditions.

    Salary and other Compensation: The hourly rate of pay ranges from $43.27 - $96.15. The salary range for this position is subject not only to the hire's skills, education, expertise, and years of experience, but also to features of the final position offered, including, but not limited to, the location of the position, the clearance required for the position, and the contract associated with the position. See our Perks & Benefits page for a general description of benefits and other potential compensation: ***********************************************************

    About Us: At Cipher Tech Solutions, Inc. (Cipher Tech), we take great pride in supporting some of the most mission-critical work in the Department of Defense and the Intelligence Community. The majority of our company is comprised of extremely talented software developers who write production tools that directly support one of two missions: digital forensics and malware reverse engineering.

    Digital forensics is the practice of extracting and processing evidence from computers, phones, tablets, cameras, flash drives, and just about any other form of digital media. The results of our work are used to support federal and military law enforcement, counterintelligence, and counterterrorism activities. Malware reverse engineering is the practice of analyzing malicious code. The goal of a reverse engineer is to understand how the malicious code works and determine facts such as: how it infected the victim's computer, what kind of information it seeks to steal, how it communicates with the author (actor), and who that actor might be.

    Vaccination Notice: Please be aware that you are applying for a position to work as a federal contractor. As such, Cipher Tech will require, and your employment will be subject to, such vaccinations as are required by federal, state, and local law.

    Important Security Clearance Information: Be aware that you are applying for a job that requires a U.S. Government Security Clearance. The U.S. government will not issue security clearances to recent users of illegal drugs. On your application to us, you must acknowledge that you have not engaged in any illegal drug use within the past twelve months (illegal drug use includes, but is not limited to, the use of illegal substances, the misuse of prescriptions, the misuse of over-the-counter substances, and the use of medical or recreational marijuana that may be deemed legal at the state level but is still considered illegal by the federal government). NOTE: CIPHER TECH WILL NOT DISCRIMINATE AGAINST ANY QUALIFIED APPLICANTS WITH A DISABILITY, INCLUDING ILLEGAL DRUG USERS WHO HAVE RECENTLY SOUGHT PROFESSIONAL TREATMENT FOR A DRUG PROBLEM.

    Cipher Tech is an Equal Opportunity Employer. All hiring and employment decisions are made on the basis of business requirements, performance, and merit, with no regard to race, color, religion, sexual orientation, gender, gender identity, national origin, or other protected status. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, or national origin. Job Posted by ApplicantPro
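The posting above mentions scripting to enable better application triage of decompiled Android code. As a purely illustrative sketch (not Cipher Tech's actual tooling), a triage script might scan decompiled Smali text for a hypothetical list of sensitive API invocations:

```python
# Illustrative only: a toy triage helper in the spirit of the posting.
# The indicator list below is a hypothetical sample; real pipelines use
# far richer signatures and decompiler output (e.g., from JADX or JEB).
SUSPICIOUS_APIS = [
    "Ljava/lang/Runtime;->exec",                       # shell command execution
    "Ldalvik/system/DexClassLoader;-><init>",          # dynamic code loading
    "Landroid/telephony/SmsManager;->sendTextMessage", # SMS abuse
]

def triage_smali(smali_text):
    """Return the suspicious API invocations found in decompiled Smali text."""
    return [api for api in SUSPICIOUS_APIS if api in smali_text]
```

A hit list like this only flags samples for deeper manual analysis; it is a starting point, not a verdict.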
    $43.3-96.2 hourly 20d ago
  • Applied AI Interface Engineer

    Mantech 4.5 company rating

    Alexandria, VA jobs

    ManTech seeks a motivated, career and customer-oriented Applied AI Interface Engineer to join our team in Alexandria, VA. In this position, you will act as a Software Engineer designing and implementing services and components for AI applications.

    Responsibilities include but are not limited to:
    - Design and build user interfaces using modern UX/UI standards.
    - Develop, implement, and maintain full-stack software solutions for AI-enabled applications.
    - Work closely with the Software Architect to understand project requirements and translate them into technical specifications.
    - Develop and integrate AI and ML capabilities on a cloud-hosted data platform that supports significant market adoption, high performance, and strict access control and governance.
    - Stay current with advancements in AI, machine learning, and software engineering, incorporating best practices into the development process.
    - Document software designs, code, and processes to ensure maintainability, scalability, and knowledge sharing among team members.
    - Participate in code reviews and provide constructive feedback to peers to ensure code quality, adherence to coding standards, and knowledge transfer within the team.

    Minimum Qualifications:
    - Bachelor's degree in Computer Science, Information Technology, or a related field.
    - 4 or more years (2 with a Master's) of experience in software development, systems integration, data management, or related fields.
    - Proficiency in JavaScript, including familiarity with modern frameworks and libraries such as React, Angular, or Vue.js.
    - Solid knowledge of HTML and CSS, including responsive design principles and front-end workflows.
    - Knowledge of Python and REST API frameworks.
    - Basic understanding of user interface (UI) and user experience (UX) design principles, with the ability to collaborate with designers to translate wireframes into functional code.
    - Strong analytical and problem-solving abilities, with the capacity to debug and resolve issues in front-end code.
    - Experience with generative AI, including API access to large language models (LLMs).

    Preferred Qualifications:
    - Experience with Docker, Kubernetes, or other containerization technology.
    - Experience working in AWS environments.
    - Strong analytical and problem-solving skills.
    - Excellent communication and interpersonal skills.
    - Ability to work effectively in a team-oriented environment.
    - Experience working with data ingest and transformation.

    Clearance Requirements: Must possess a current and active TS/SCI clearance.

    Physical Requirements: The person in this position must be able to remain in a stationary position 50% of the time, and occasionally move about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
    $63k-79k yearly est. 1d ago
  • Systems Engineer

    Mantech 4.5 company rating

    Annapolis, MD jobs

    ManTech seeks a motivated, career and customer-oriented Systems Engineer III to join our team in Annapolis Junction, Maryland. We are seeking a seasoned IT professional responsible for designing, implementing, and troubleshooting the high-level architecture of an organization's backend infrastructure. You will communicate with other project personnel, Government stakeholders, and senior executives.

    Responsibilities include but are not limited to:
    - Administer and maintain IT infrastructure, using expert-level knowledge of the Linux operating system, along with deep expertise in server administration, virtualization, and cloud platforms like AWS or Azure, to manage the organization's backend systems.
    - Develop and implement automation solutions, creating scalable automation services with strong scripting and orchestration skills and proficiency in tools like Python, Bash, and Ansible.
    - Design and manage system architecture, applying systems engineering principles to design, implement, and manage the high-level architecture of the organization's backend infrastructure.
    - Ensure network and system security through a strong understanding of networking and security fundamentals, including TCP/IP, firewalls, and VPNs, to protect the organization's infrastructure.
    - Communicate with stakeholders, interacting directly with government personnel, senior executives, and other project members to discuss technical issues and project status.
    - Provide technical guidance and troubleshooting as a top-level troubleshooter, addressing technical considerations, problems, and issues for both internal teams and government stakeholders.

    Minimum Qualifications:
    - Bachelor's degree in a STEM field, or high school diploma and 13 years of experience, or Associate's degree and 11 years of experience in lieu of a degree. A candidate holding a relevant Master's degree may be granted 2 years of experience credit for that extra education; a candidate holding a relevant PhD may be granted 4 years of experience credit.
    - 5 years of position-relevant work experience.
    - DoD 8570.01-M IAT Level II.
    - Strong proficiency in scripting and automation, with hands-on experience in Python, Bash, and an infrastructure orchestration tool like Ansible.
    - Experience in front-end web development to create internal tools and solutions.
    - Proven background as an Orchestration Engineer or in a similar role focused on building scalable automation services.
    - Solid grasp of TCP/IP networking, firewalls, VLANs, and VPNs.
    - Expert-level knowledge of the Linux operating system.

    Preferred Qualifications:
    - Experience with full-stack development, including creating and interacting with databases and APIs.
    - Knowledge of Infrastructure as Code (IaC) principles, preferably with experience using Terraform.
    - Familiarity with public cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes).
    - Proficiency with virtualization platforms such as VMware vSphere or Microsoft Hyper-V.
    - Exceptional troubleshooting and problem-solving skills, with a talent for systematically diagnosing and resolving complex technical issues.
    - A foundational understanding of traditional systems administration principles.

    Clearance Requirements: Must have a current/active TS/SCI.

    Physical Requirements: The person in this position must have the ability to perform tasks while stationary at a computer 50% of the time, with specific emphasis on visual acuity for close-range work, and occasionally move about inside the office to access file cabinets and office machinery, or to communicate with co-workers, management, and customers via email, phone, and/or virtual communication, which may involve delivering presentations.
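The role above centers on scripting and orchestration glue around tools like Ansible. As a small hedged sketch (all names hypothetical, not ManTech's tooling), one common piece of such glue is generating an Ansible-style INI inventory from a host mapping:

```python
def render_inventory(groups):
    """Render a minimal Ansible INI inventory from a {group: [hosts]} mapping.
    Hypothetical helper for illustration; real inventories often add host
    variables, children groups, or come from dynamic inventory plugins."""
    lines = []
    for group in sorted(groups):
        lines.append(f"[{group}]")
        lines.extend(sorted(groups[group]))
        lines.append("")  # blank line between groups
    return "\n".join(lines)
```

The rendered text can be written to a file and passed to `ansible-playbook -i inventory.ini`.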
    $81k-106k yearly est. 3d ago
  • Ping Identity (IDM) / ForgeRock Identity Engineer

    Matlen Silver 3.7 company rating

    Charlotte, NC jobs

    Job Title: Ping Identity (IDM) / ForgeRock Identity Engineer
    Duration: 12 months
    Other Approved Locations: Pennington, NJ
    Pay Scale: $75-80/hr W2 (cannot subcontract or C2C)

    Job Description: Matlen Silver has partnered with our leading financial services client to recruit a Ping/ForgeRock Identity Management (IDM) Engineer to provide engineering and build responsibilities for the Identity and Authentication team. The Ping/ForgeRock Engineer is responsible for designing, implementing, and maintaining Identity and Access Management (IAM) solutions using the Ping Identity (formerly ForgeRock) platform. This role is critical in securing digital identities, enabling seamless access, and ensuring compliance across enterprise systems.

    Key Responsibilities:
    - Design & Implementation: Develop and customize Ping Identity/ForgeRock components (AM, IDM, DS, IG) using Java and Groovy. Integrate Ping Identity/ForgeRock IAM solutions with enterprise applications and infrastructure. Implement identity flows including Single Sign-On (SSO), Multi-Factor Authentication (MFA), and access governance.
    - Development & Integration: Build RESTful APIs for identity services and external integrations. Create attribute mapping logic across systems (LDAP, RDBMS, HR systems). Automate software upgrades and patches using CI/CD pipelines.
    - Security & Performance: Conduct performance tuning and capacity planning. Troubleshoot and resolve IAM-related issues, including synchronization and integration. Ensure high availability and disaster recovery for IAM services.
    - Collaboration & Documentation: Work with DevOps, security, QA, and business teams to support deployments and testing. Document IAM architectures, workflows, and technical specifications.

    Required Skills & Qualifications:
    - 5+ years of experience with Ping Identity/ForgeRock IAM solutions (AM, IDM, DS, IG).
    - Strong Java and Groovy scripting/development skills.
    - Experience with REST APIs, LDAP, OAuth2, OpenID Connect, and SAML.
    - Familiarity with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
    - Hands-on experience with CI/CD tools and DevOps practices.
    - Excellent problem-solving and debugging skills.

    Preferred Experience:
    - Prior work in financial services or regulated environments.
    - Experience with identity governance and compliance frameworks.
    - Knowledge of scripting languages and automation tools (Ansible, Terraform).
    - Experience with monitoring tools (Splunk, Dynatrace, and Prometheus).

    About Matlen Silver: Experience matters. Let your experience be driven by our experience. For more than 40 years, Matlen Silver has delivered solutions for complex talent and technology needs to Fortune 500 companies and industry leaders. Led by hard work, honesty, and a trusted team of experts, we can say that Matlen Silver technology has created a solutions experience and legacy of success that is the difference in the way the world works.

    Matlen Silver is an Equal Opportunity Employer and considers all applicants for all positions without regard to race, color, religion, gender, national origin, age, sexual orientation, veteran status, the presence of a non-job-related medical condition or disability, or any other legally protected status. If you are a person with a disability needing assistance with the application or at any point in the hiring process, please contact us at: ********************* // ************
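The posting above lists OAuth2 and OpenID Connect among the required skills. In OIDC, identity claims travel as JWTs: three base64url segments (header.payload.signature). The following stdlib-only sketch decodes the payload segment; it deliberately does NOT verify the signature, which real IAM code must always do before trusting any claim:

```python
import base64
import json

def jwt_claims(token):
    """Decode the payload segment of a JWT WITHOUT signature verification.
    Sketch of the base64url token layout only: production IAM code must
    verify the signature (e.g., against the IdP's published JWKS keys)
    before trusting any claim in the payload."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

In practice a library such as the platform's own token validation (or a maintained JOSE library) handles both decoding and verification.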
    $75-80 hourly 1d ago
  • F5 Engineer

    Matlen Silver 3.7 company rating

    Charlotte, NC jobs

    Job Title: F5 Engineer
    Duration: 18-month W2 contract required
    Pay Scale: $60-$70/hour
    Face-to-face interview REQUIRED; one and done!

    A senior load balancer engineer is required to build complex solutions for the client's (Bank of America's) applications.

    Responsibilities include:
    - Perform various load balancer changes in accordance with project requirements, including building new configurations such as new VIPs and/or WideIP setups.
    - Modify existing setups and troubleshoot application traffic issues flowing through load balancers.

    Main points of contact:
    - Project managers
    - Peers on the team, for design peer review
    - Escalations to team lead and manager when necessary

    Requirements:
    - Expert level on F5 load balancing (LTM, GTM).
    - Expert on F5 iRules to control traffic flows and manipulate headers.
    - Strong understanding of TCP/IP network communications.
    - Good experience with F5 (GTM, LTM, SSL offloading).
    - L2/L3 support experience.
    - Good routing and switching experience.
    - Strong DNS experience.
    - Load balancing SME: good experience with load balancing options and features including OneConnect, Persistence, SSL, and HTTP.
    - Experience with C.
    - Good experience with iRules and/or iControl.
    - Experience with capacity/threshold management and workload management.
    - Python needed; understanding of programming skills including Python, Java, TCL, JavaScript, jQuery, Perl.
    - Strong communication skills to work with project managers and solution architects on deliverables.

    About Matlen Silver: Experience matters. Let your experience be driven by our experience. For more than 40 years, Matlen Silver has delivered solutions for complex talent and technology needs to Fortune 500 companies and industry leaders. Led by hard work, honesty, and a trusted team of experts, we can say that Matlen Silver technology has created a solutions experience and legacy of success that is the difference in the way the world works.

    Matlen Silver is an Equal Opportunity Employer and considers all applicants for all positions without regard to race, color, religion, gender, national origin, age, sexual orientation, veteran status, the presence of a non-job-related medical condition or disability, or any other legally protected status. If you are a person with a disability needing assistance with the application or at any point in the hiring process, please contact us at: ********************* // ************

    At The Matlen Silver Group, Inc., W2 employees are eligible for the following benefits: health, vision, and dental insurance (single and family coverage); 401(k) plan (employee contributions only).
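Two of the features named above, round-robin pool selection and Persistence (sticky sessions), are easy to illustrate. The sketch below is NOT F5 iRule code (iRules are written in TCL); it is a hypothetical Python model of the behavior only:

```python
import itertools

class StickyPool:
    """Toy model of round-robin load balancing with source-address
    persistence (sticky sessions). Illustrative only; a real F5 LTM
    configures this via pool load-balancing method + persistence profile."""

    def __init__(self, members):
        self._ring = itertools.cycle(members)
        self._sticky = {}  # client address -> assigned pool member

    def pick(self, client_addr):
        # New clients get the next member round-robin; returning clients
        # stay pinned to the member they were first assigned.
        if client_addr not in self._sticky:
            self._sticky[client_addr] = next(self._ring)
        return self._sticky[client_addr]
```

Real persistence records also expire on a timeout, which this sketch omits.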
    $60-70 hourly 3d ago
  • Jenkins Platform Engineer

    Perennial Resources International 4.1 company rating

    Charlotte, NC jobs

    Full-time | Charlotte, NC

    Qualifications:
    - Operational experience with one or more of the following CI/CD tools: Jenkins, Octopus Deploy.
    - 3+ years of experience managing, maintaining, and scaling a DevOps environment using Jenkins for a large number of deployments.
    - Experience with configuration management tools (Ansible, Puppet/Chef).
    - Experience automating infrastructure deployment.
    - Demonstrated hands-on working knowledge of AWS and its services, including but not limited to EC2, S3, IAM, RDS, and Aurora, plus Python.
    - Experience with software-defined networks, software-defined data centers, Virtual Private Clouds, and other core network technology.
    - Experienced with DevOps and GitOps.
    - Experience with multiple cloud platforms (GCP or Azure) a plus.
    - Proactive, enthusiastic, and ambitious.
    - Good team worker with excellent verbal and written communication skills.
    $66k-93k yearly est. 1d ago
  • Platform Engineer

    Intraedge 3.9 company rating

    Charlotte, NC jobs

    We are seeking an experienced and highly skilled Platform Lead to join our technology team. The ideal candidate will have a strong background in cloud infrastructure, automation, and platform engineering, with deep expertise in AWS services (EC2, RDS, S3, Lambda, Load Balancer, EKS, Encryption, Secrets Manager), Terraform, Git, and Python. As Platform Lead, you will architect, implement, and manage scalable, secure, and reliable cloud platforms to support our business objectives.

    Key Responsibilities:
    - Lead the design, implementation, and management of cloud infrastructure using AWS services such as EC2, RDS, S3, Lambda, Load Balancer, EKS, Encryption, and Secrets Manager.
    - Develop and maintain infrastructure-as-code solutions using Terraform.
    - Build automation scripts and tools in Python to streamline platform operations and deployments.
    - Oversee platform reliability, scalability, and security, ensuring best practices are followed.
    - Collaborate with software engineering, DevOps, and security teams to deliver robust cloud solutions.
    - Manage CI/CD pipelines and version control using Git.
    - Monitor, troubleshoot, and resolve platform issues, ensuring high availability and performance.
    - Document architecture, processes, and procedures for platform operations.
    - Mentor and guide team members, fostering a culture of technical excellence and continuous improvement.

    Qualifications:
    - Bachelor's degree in Computer Science, Engineering, or a related field.
    - 7+ years of professional experience in cloud platform engineering and development.
    - Extensive hands-on experience with AWS services: EC2, RDS, S3, Lambda, Load Balancer, EKS, Encryption, and Secrets Manager.
    - Proficiency in infrastructure as code using Terraform.
    - Strong programming skills in Python.
    - Experience with version control systems, especially Git.
    - Solid understanding of cloud security, encryption, and secrets management.
    - Proven track record in designing and operating scalable, reliable, and secure cloud platforms.
    - Excellent problem-solving skills, attention to detail, and ability to communicate effectively with stakeholders.
    - Experience with GenAI or AI/ML platforms is a plus.
    $70k-89k yearly est. 1d ago
  • Quantitative Engineer

    KMM Technologies, Inc. 3.4 company rating

    Reston, VA jobs

    Quantitative Engineer (Testing, Automation & Model Validation)
    Duration: Long term
    MUST HAVE: Python development; AWS cloud services; finance/mortgage experience

    Job Description: We are seeking a Quantitative Engineer for a contract position with strong expertise in testing, automation, and quantitative analysis to join the MFCW application team. The ideal candidate will have hands-on experience with AWS cloud services and Python development, and a solid understanding of software testing methodologies combined with quantitative modeling principles. This role involves designing and implementing automated test frameworks, validating complex models, generating test cases, performing functional and regression testing, and maintaining comprehensive documentation to ensure the accuracy and reliability of analytical systems.

    Responsibilities:
    - Automated Testing & Frameworks: Design, develop, and maintain automated test scripts and frameworks for regression and performance testing of quantitative models and applications.
    - Test Case Generation & Execution: Generate detailed test cases based on functional requirements, model specifications, and quantitative assumptions.
    - Model Vetting & Validation: Perform rigorous testing of financial and analytical models, including sensitivity analysis, back-testing, and stress testing in AWS environments.
    - Data Setup & Simulation: Prepare datasets and execute model runs in AWS for validation and scenario analysis.
    - Defect Resolution & Optimization: Collaborate with development and modeling teams to identify, troubleshoot, and resolve defects in model logic and automation workflows.
    - Documentation & Governance: Document test plans and validation results, and maintain compliance with model governance standards.
    - Continuous Improvement: Enhance and optimize existing test automation processes, ensuring scalability and reproducibility.
    - Cloud Integration: Utilize AWS services (EC2, S3, Lambda) for deployment, testing environments, and automation pipelines.
    - Quality Assurance: Ensure code quality and testing rigor through best practices and continuous integration.

    Required Skills & Qualifications:
    - Education: Bachelor's degree in Computer Science, Engineering, Applied Mathematics, or a related discipline. Minimum 8+ years of experience in software testing, automation, or quantitative engineering.
    - Programming: Strong proficiency in Python for development and automation; exposure to Java and R is desirable.
    - Cloud Expertise: Hands-on experience with AWS services (EC2, S3, Lambda) and cloud-based testing environments.
    - Testing Expertise: Ability to design and execute unit tests, integration tests, regression tests, and model validation tests; experience creating and managing test cases and documentation.
    - Automation: Knowledge of test automation frameworks and tools (e.g., PyTest, Selenium, or similar); familiarity with microservices and distributed systems is a plus.
    - Quantitative Analysis: Understanding of statistical testing, model risk concepts, and performance metrics for analytical models.
    - Version Control & CI/CD: Familiarity with Git and continuous integration/deployment pipelines.
    - Soft Skills: Strong analytical and problem-solving skills; excellent communication and documentation abilities.
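A core task named above is regression-testing quantitative models against a known-good baseline. A minimal hedged sketch of such a check (all names hypothetical; a real suite would live under PyTest and tie tolerances to model governance standards):

```python
import math

def baseline_mismatches(model_out, baseline, rel_tol=1e-6):
    """Compare model outputs against a golden baseline; return mismatches
    as (index, got, expected) tuples. A regression test would assert this
    list is empty after any model or infrastructure change."""
    if len(model_out) != len(baseline):
        raise ValueError("output and baseline lengths differ")
    return [(i, got, exp)
            for i, (got, exp) in enumerate(zip(model_out, baseline))
            if not math.isclose(got, exp, rel_tol=rel_tol)]
```

Relative tolerance matters here: bit-exact comparison of floating-point model outputs across environments (local vs. an AWS run) usually fails for harmless reasons.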
    $74k-98k yearly est. 1d ago
  • Data Engineer

    Pyramid Consulting, Inc. 4.1 company rating

    McLean, VA jobs

    Immediate need for a talented Data Engineer. This is a 12-month contract opportunity with long-term potential, located in McLean, VA (hybrid). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 25-93504
    Pay Range: $70 - $75/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).

    Key Responsibilities:
    - Design, develop, and maintain data pipelines leveraging Python, Spark/PySpark, and cloud-native services.
    - Build and optimize data workflows, ETL processes, and transformations for large-scale structured and semi-structured datasets.
    - Write advanced and efficient SQL queries against Snowflake, including joins, window functions, and performance tuning.
    - Develop backend and automation tools using Golang and/or Python as needed.
    - Implement scalable, secure, and high-quality data solutions across AWS services such as S3, Lambda, Glue, Step Functions, EMR, and CloudWatch.
    - Troubleshoot complex production data issues, including pipeline failures, data quality gaps, and cloud environment challenges.
    - Perform root-cause analysis and implement automation to prevent recurring issues.
    - Collaborate with data scientists, analysts, platform engineers, and product teams to enable reliable, high-quality data access.
    - Ensure compliance with enterprise governance, data quality, and cloud security standards.
    - Participate in Agile ceremonies, code reviews, and DevOps practices to ensure high engineering quality.

    Key Requirements and Technology Experience:
    - Key skills: Python, Spark/PySpark, AWS (Glue, EC2, Lambda), Golang, and the ability to write and troubleshoot complex SQL queries against Snowflake tables.
    - Proficiency in Python with experience building scalable data pipelines or ETL processes.
    - Strong hands-on experience with Spark/PySpark for distributed data processing.
    - Experience writing complex SQL queries (Snowflake preferred), including optimization and performance tuning.
    - Working knowledge of AWS cloud services used in data engineering (S3, Glue, Lambda, EMR, Step Functions, CloudWatch, IAM).
    - Experience with Golang for scripting, backend services, or performance-critical processes.
    - Strong debugging, troubleshooting, and analytical skills across cloud and data ecosystems.
    - Familiarity with CI/CD workflows, Git, and automated testing.

    Our client is a leader in the banking and financial industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
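The role above calls for window-function SQL against Snowflake. A common interview-style pattern is keeping the latest row per key, in Snowflake roughly `QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) = 1`. As a hedged stdlib sketch, the same dedup logic in plain Python (illustrative, not the posting's actual codebase):

```python
from itertools import groupby
from operator import itemgetter

def latest_per_key(rows, key, order):
    """Keep the most recent row per key: the same result as the Snowflake
    pattern ROW_NUMBER() OVER (PARTITION BY key ORDER BY order DESC) = 1.
    `rows` is a list of dicts; `key` and `order` are field names."""
    rows = sorted(rows, key=itemgetter(key))      # groupby needs sorted input
    return [max(grp, key=itemgetter(order))       # newest row in each partition
            for _, grp in groupby(rows, key=itemgetter(key))]
```

In Spark/PySpark the equivalent uses `Window.partitionBy(...).orderBy(...)` with `row_number()`; the logic is identical.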
    $70-75 hourly 3d ago
  • DevOps Engineer (Cloud & CI/CD)

    Pyramid Consulting, Inc. 4.1 company rating

    Charlotte, NC jobs

    Immediate need for a talented DevOps Engineer (Cloud & CI/CD). This is a 12-month opportunity with long-term potential, located in Charlotte, NC (hybrid). Please review the job description below and contact me ASAP if you are interested.

    Job Diva ID: 25-91664
    Pay Range: $60 - $65/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).

    Key Responsibilities:
    - Design, implement, and manage CI/CD pipelines to support continuous integration and continuous deployment.
    - Automate infrastructure provisioning using IaC tools such as Terraform, CloudFormation, or Ansible.
    - Manage cloud infrastructure on platforms like AWS, Azure, or GCP.
    - Monitor application performance using logging and monitoring tools (Prometheus, Grafana, ELK, Splunk).
    - Implement configuration management using Ansible, Puppet, or Chef.
    - Maintain and optimize containerization and orchestration (Docker, Kubernetes).
    - Ensure system security through compliance checks, patching, and vulnerability assessments.
    - Collaborate with development teams to troubleshoot issues across the entire stack.
    - Improve automation efficiency, deployment speed, and operational reliability.
    - Maintain documentation for systems, processes, and configurations.

    Key Requirements and Technology Experience:
    - Key skills: Docker, CI/CD, cloud.
    - Strong analytical, troubleshooting, and problem-solving skills.
    - Jenkins or another CI system.
    - Cloud engineering (cloud computing) experience with OpenShift, AWS, Azure, or GCP.
    - Design and maintain CI/CD processes and tools.
    - Docker experience with Kubernetes container orchestration.
    - Experience with YAML config management tools like Helm/Ansible.
    - Setting up monitoring and alerts.
    - Working with software developers and software engineers to ensure that development follows established processes and works as intended.
    - Perform root cause analysis of production errors and resolve technical issues.
    - Experience with Agile processes.

    Our client is a leader in the banking industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
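The CI/CD responsibilities above follow a standard stage model: later stages run only if earlier ones succeed. A minimal hedged sketch of that short-circuit behavior (hypothetical stage names; real pipelines are declared in Jenkinsfiles or YAML, not Python):

```python
def run_pipeline(stages):
    """Run (name, callable) stages in order, stopping at the first failure,
    the way a CI/CD pipeline skips deploy when build or test fails.
    Returns (completed_stage_names, failed_stage_name_or_None)."""
    completed = []
    for name, step in stages:
        try:
            step()
        except Exception:
            return completed, name  # abort: later stages never run
        completed.append(name)
    return completed, None
```

The same fail-fast contract is what makes pipeline stage ordering (build, test, deploy) a safety mechanism rather than just a checklist.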
    $60-65 hourly 2d ago
  • Principal Big Data Engineer

Pyramid Consulting, Inc. 4.1 company rating

    Durham, NC jobs

Immediate need for a talented Principal Big Data Engineer. This is a 12+ month contract opportunity with long-term potential and is located in Durham, NC (Onsite). Please review the job description below and contact me ASAP if you are interested. Job ID: 25-94747 Pay Range: $63 - $73/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location). Key Responsibilities: We are seeking a highly motivated Data Engineer to join the Data Aggregation team. Data Aggregation is a growing area, and we are looking for a skilled engineer to drive the design and development of industry-leading external-facing API solutions. The comprehensive API/data solutions will seek to bring together retail, clearing, and custody capabilities to help external fintech partners with financial goal planning, investment advice, and financial projection capabilities to better serve our clients and more efficiently partner with them to accomplish their financial objectives. Key Requirements and Technology Experience: Bachelor's or Master's degree in a technology-related field (e.g., Engineering, Computer Science, etc.) required, with 10 years of working experience. Big Data Processing: Apache Spark (EMR), Scala, distributed computing, performance optimization. Cloud & Infrastructure: AWS (S3, EMR, EC2, Lambda, Step Functions), multi-region DR strategy. Databases: Cassandra/YugabyteDB (NoSQL), Oracle, PostgreSQL, Snowflake. Data Pipeline: ETL design, API integration, batch processing. DevOps & CI/CD: Jenkins, Docker, Kubernetes, Terraform, Git. Monitoring & Observability: Splunk, Datadog APM, Grafana, CloudWatch. Orchestration: Control-M job scheduling, workflow automation. Financial domain experience. Our client is a leader in the Financial Industry, and we are currently interviewing to fill this and other similar contract positions.
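As a hedged sketch of the batch-processing pattern behind the Spark work listed above: a Spark aggregation is conceptually a map step (emit key/value pairs), a shuffle (group by key), and a reduce (combine each group). The plain-Python version below shows that shape on toy records; the account names and amounts are invented for illustration, and real jobs would run this distributed across an EMR cluster.

```python
from collections import defaultdict

# Toy input records: (key, value) pairs, standing in for a partitioned dataset.
records = [
    ("acct-1", 100.0),
    ("acct-2", 50.0),
    ("acct-1", 25.0),
]

# "Map" + "shuffle": group values by key, as Spark does across partitions.
grouped: dict[str, list[float]] = defaultdict(list)
for key, value in records:
    grouped[key].append(value)

# "Reduce": aggregate each key's values into a single result.
totals = {key: sum(values) for key, values in grouped.items()}
```

Spark's value is running exactly this pattern over data far too large for one machine, with the shuffle happening over the network between executors.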
If you are interested in this position, please apply online for immediate consideration. Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $63-73 hourly 2d ago
• Senior Data Engineer

Pyramid Consulting, Inc. 4.1 company rating

    McLean, VA jobs

Immediate need for a talented Senior Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in McLean, VA (Remote). Please review the job description below and contact me ASAP if you are interested. Job ID: 25-84666 Pay Range: $64 - $68/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location). Key Responsibilities: Demonstrated ability implementing data warehouse solutions using modern data platforms such as Client, Databricks, or Redshift. Build data integration solutions between transaction systems and analytics platforms. Expand data integration solutions to ingest data from internal and external sources and to further transform it per business consumption needs. Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI, and reporting. Fundamental understanding of building data products via data enrichment and ML. Act as a team player and share knowledge with the existing team members. Key Requirements and Technology Experience: Key skills: Python, AWS, Snowflake. Bachelor's degree in computer science or a related field. Minimum 5 years of experience in building data-driven solutions. At least 3 years of experience working with AWS services. Applicants must be authorized to work in the US without requiring employer sponsorship currently or in the future. U.S. FinTech does not offer H-1B sponsorship for this position. Expertise in real-time data solutions; good-to-have knowledge of stream processing, message-oriented platforms, and ETL/ELT tools. Strong scripting experience using Python and SQL. Working knowledge of foundational AWS compute, storage, networking, and IAM. Understanding of Gen AI models, prompt engineering, RAG, fine-tuning, and pre-tuning will be a plus. Solid scripting experience in AWS using Lambda functions.
Knowledge of CloudFormation templates preferred. Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Client. Experience in building data pipelines, with a related understanding of data ingestion and transformation of structured, semi-structured, and unstructured data across cloud services. Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions. Experience with one or more data integration tools such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka, etc. Strong understanding of data security: authorization, authentication, encryption, and network security. Hands-on experience using and extending machine learning frameworks and libraries, e.g., scikit-learn, PyTorch, TensorFlow, XGBoost, etc., preferred. Experience with the AWS SageMaker family of services or similar tools to develop machine learning models preferred. Strong written and verbal communication skills to facilitate meetings and workshops to collect data, functional, and technology requirements, document processes, data flows, and gap analysis, and associated data to support data management/governance efforts. Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures. Demonstrated ability to be self-directed, with excellent organizational, analytical, and interpersonal skills, and to consistently meet or exceed deadline deliverables. Strong understanding of the importance and benefits of good data quality, and the ability to champion results across functions. Our client is a leader in the Financial Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration. Pyramid Consulting, Inc. 
provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $64-68 hourly 5d ago
  • Senior Data Engineer

Synechron 4.4 company rating

    Charlotte, NC jobs

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets. Our Challenge: We are looking for a skilled senior data engineer with comprehensive experience in designing, developing, and maintaining scalable data solutions within the financial and regulatory domains. Proven expertise in leading end-to-end data architectures, integrating diverse data sources, and ensuring data quality and accuracy. Additional Information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $155k/year & benefits (see below). Work location: New York City, NY (Hybrid, 3 days a week) The Role Responsibilities: Advanced proficiency in Python, SQL Server, Snowflake, Azure Databricks, and PySpark. Strong understanding of relational databases, ETL processes, and data modeling. Expertise in system design, architecture, and implementing robust data pipelines. Hands-on experience with data validation, quality checks, and automation tools (Autosys, Control-M).
Skilled in Agile methodologies, SDLC processes, and CI/CD pipelines. Effective communicator with the ability to collaborate with business analysts, users, and global teams. Requirements: Overall 10+ years of IT experience is required. Collaborate with business stakeholders to gather technical specifications and translate business requirements into technical solutions. Develop and optimize data models and schemas for efficient data integration and analysis. Lead application development involving Python, PySpark, SQL, Snowflake, and Databricks platforms. Implement data validation procedures to maintain high data quality standards. Strong experience in SQL (writing complex queries, joins, tables, etc.). Conduct comprehensive testing (UT, SIT, UAT) alongside business and testing teams. Provide ongoing support, troubleshooting, and maintenance in production environments. Contribute to architecture and design discussions to ensure scalable, maintainable data solutions. Experience with financial systems (capital markets, credit risk, and regulatory compliance applications). We offer: A highly competitive compensation and benefits package. A multinational organization with 58 offices in 21 countries and the possibility to work abroad. 10 days of paid annual leave (plus sick leave and national holidays). Maternity & paternity leave plans. A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region). Retirement savings plans. A higher education certification policy. Commuter benefits (varies by region). Extensive training opportunities, focused on skills, substantive knowledge, and personal development. On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses. Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellence (CoE) groups. 
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms. A flat and approachable organization. A truly diverse, fun-loving, and global work culture. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-155k yearly 3d ago
  • Cloud Data Engineer- Databricks

Infocepts 3.7 company rating

    McLean, VA jobs

    Purpose: We are seeking a highly skilled Cloud Data Engineer with deep expertise in Databricks and modern cloud platforms such as AWS, Azure, or GCP. This role is ideal for professionals who are passionate about building next-generation data platforms, optimizing complex data workflows, and enabling advanced analytics and AI in cloud-native environments. You'll have the opportunity to work with Fortune-500 organizations in data and analytics, helping them unlock the full potential of their data through innovative, scalable solutions. Key Result Areas and Activities: Design and implement robust, scalable data engineering solutions. Build and optimize data pipelines using Databricks, including serverless capabilities, Unity Catalog, and Mosaic AI. Collaborate with analytics and AI teams to enable real-time and batch data workflows. Support and improve cloud-native data platforms (AWS, Azure, GCP). Ensure adherence to best practices in data modeling, warehousing, and governance. Contribute to automation of data workflows using CI/CD, DevOps, or DataOps practices. Implement and maintain workflow orchestration tools like Apache Airflow and dbt. Roles & Responsibilities Essential Skills 4+ years of experience in data engineering with a focus on scalable solutions. Strong hands-on experience with Databricks in a cloud environment. Proficiency in Spark and Python for data processing. Solid understanding of data modeling, data warehousing, and architecture principles. Experience working with at least one major cloud provider (AWS, Azure, or GCP). Familiarity with CI/CD pipelines and data workflow automation. Desirable Skills Direct experience with Unity Catalog and Mosaic AI within Databricks. Working knowledge of DevOps/DataOps principles in a data engineering context. Exposure to Apache Airflow, dbt, and modern data orchestration frameworks. Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 
Relevant certifications in cloud platforms (AWS/Azure/GCP) or Databricks are a plus. Qualities: Able to consult, write, and present persuasively Able to work in a self-organized and cross-functional team Able to iterate based on new information, peer reviews, and feedback Able to work seamlessly with clients across multiple geographies Research focused mindset Excellent analytical, presentation, reporting, documentation and interactive skills "Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
    $77k-105k yearly est. 5d ago
  • AWS Data Engineer

Capgemini 4.5 company rating

    Charlotte, NC jobs

    We are looking for a skilled and experienced AWS Data Engineer with 10+ Years of experience to join our team. This role requires hands-on expertise in AWS serverless technologies, Big Data platforms, and automation tools. The ideal candidate will be responsible for designing scalable data pipelines, managing cloud infrastructure, and enabling secure, reliable data operations across marketing and analytics platforms. Key Responsibilities: Design, build, and deploy automated CI/CD pipelines for data and application workflows. Analyze and enhance existing data pipelines for performance and scalability. Develop semantic data models to support activation and analytical use cases. Document data structures and metadata using Collibra or similar tools. Ensure high data quality, availability, and integrity across platforms. Apply SRE and DevSecOps principles to improve system reliability and security. Manage security operations within AWS cloud environments. Configure and automate applications on AWS instances. Oversee all aspects of infrastructure management, including provisioning and monitoring. Schedule and automate jobs using tools like Step Functions, Lambda, Glue, etc. Required Skills & Experience: Hands-on experience with AWS serverless technologies: Lambda, Glue, Step Functions, S3, RDS, DynamoDB, Athena, CloudFormation, CloudWatch Logs. Proficiency in Confluent Kafka, Splunk, and Ansible. Strong command of SQL and scripting languages: Python, R, Spark. Familiarity with data formats: JSON, XML, Parquet, Avro. Experience in Big Data engineering and cloud-native data platforms. Functional knowledge of marketing platforms such as Adobe, Salesforce Marketing Cloud, and Unica/Interact (nice to have). Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or related field. AWS, Big Data, or DevOps certifications are a plus. Experience working in hybrid cloud environments and agile teams. 
Life at Capgemini Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer: Flexible work Healthcare including dental, vision, mental health, and well-being programs Financial well-being programs such as 401(k) and Employee Share Ownership Plan Paid time off and paid holidays Paid parental leave Family building benefits like adoption assistance, surrogacy, and cryopreservation Social well-being benefits like subsidized back-up child/elder care and tutoring Mentoring, coaching and learning programs Employee Resource Groups Disaster Relief Disclaimer Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law. This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please get in touch with your recruiting contact. 
Click the following link for more information on your rights as an Applicant ************************************************************************** Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
    $77k-99k yearly est. 5d ago
  • Data Engineer with Hadoop Expertise

Capgemini 4.5 company rating

    Cary, NC jobs

    We are seeking a highly skilled and motivated Data Engineer with strong hands-on experience in the Hadoop Big Data ecosystem. The ideal candidate will have deep expertise in big data technologies and frameworks, and a passion for building scalable data solutions that drive business insights. Key Responsibilities: Design, develop, and maintain scalable data pipelines using Hadoop and related technologies. Work with large-scale datasets using tools such as Spark, Scala, Hive, Pig, and HBase. Implement and optimize data workflows and ETL processes. Collaborate with data scientists, analysts, and other engineering teams to support data needs. Ensure data quality, integrity, and governance across platforms. Troubleshoot and resolve issues related to data processing and performance. Required Skills & Qualifications: Strong hands-on experience as a Hadoop Developer or Big Data Engineer. Advanced knowledge of the Hadoop ecosystem and its components. In-depth expertise in Scala, Spark, and similar frameworks. Proficient in HBase, SOLR, Hive, Gremlin, and Pig. Experience with data modeling, performance tuning, and distributed computing. Nice to Have: Experience with Spark Streaming, Flume, or Apache NiFi. Exposure to Azure Data Engineering stack, including: Azure Synapse Analytics Azure Data Factory (ADF) Azure Databricks Disclaimer Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law. This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. 
Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Click the following link for more information on your rights as an Applicant **************************************************************************
    $78k-100k yearly est. 4d ago
  • Sr. Data Engineer (SQL+Python+AWS)

SGS Technologie 3.5 company rating

    Saint Petersburg, FL jobs

We are looking for a Sr. Data Engineer (SQL+Python+AWS) to work on a 12+ month contract (potential extension; may convert to full-time), hybrid at St. Petersburg, FL 33716, with a direct financial client; W2 only, for US Citizens or Green Card Holders. Notes from the Hiring Manager: • Setting up Python environments and data structures to support the Data Science/ML team. • No prior Data Science or Machine Learning experience required. • Role involves building new data pipelines and managing file-loading connections. • Strong SQL skills are essential. • Contract-to-hire position. • Hybrid role based in St. Pete, FL (33716) only. Duties: This role builds and maintains data pipelines that connect Oracle-based source systems to AWS cloud environments, providing well-structured data for analysis and machine learning in AWS SageMaker. It includes working closely with data scientists to deliver scalable data workflows as a foundation for predictive modeling and analytics. • Develop and maintain data pipelines to extract, transform, and load data from Oracle databases and other systems into AWS environments (S3, Redshift, Glue, etc.). • Collaborate with data scientists to ensure data is prepared, cleaned, and optimized for SageMaker-based machine learning workloads. • Implement and manage data ingestion frameworks, including batch and streaming pipelines. • Automate and schedule data workflows using AWS Glue, Step Functions, or Airflow. • Develop and maintain data models, schemas, and cataloging processes for discoverability and consistency. • Optimize data processes for performance and cost efficiency. • Implement data quality checks, validation, and governance standards. • Work with DevOps and security teams to comply with RJ standards. Skills: Required: • Strong proficiency with SQL and hands-on experience working with Oracle databases. • Experience designing and implementing ETL/ELT pipelines and data workflows. 
• Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM. • Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.). • Solid understanding of data modeling, relational databases, and schema design. • Familiarity with version control, CI/CD, and automation practices. • Ability to collaborate with data scientists to align data structures with model and analytics requirements Preferred: • Experience integrating data for use in AWS SageMaker or other ML platforms. • Exposure to MLOps or ML pipeline orchestration. • Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation). • Knowledge of data warehouse design patterns and best practices. • Experience with data orchestration tools (e.g., Apache Airflow, Step Functions). • Working knowledge of Java is a plus. Education: B.S. in Computer Science, MIS or related degree and a minimum of five (5) years of related experience or combination of education, training and experience.
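As a hedged sketch of the extract-transform-load flow this role describes: the example below stands in for the Oracle-to-AWS pipeline using an in-memory SQLite database on both ends (the `trades` and `notional` tables and all values are hypothetical, not from the client's systems).

```python
import sqlite3

# Extract: a stand-in "source system" table (SQLite here, Oracle in the real pipeline).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER, price REAL)")
src.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("AAPL", 10, 150.0), ("AAPL", 5, 152.0), ("MSFT", 3, 300.0)],
)

# Transform: aggregate notional value per symbol in SQL.
rows = src.execute(
    "SELECT symbol, SUM(qty * price) FROM trades GROUP BY symbol ORDER BY symbol"
).fetchall()

# Load: write the curated result into a target table (stand-in for S3/Redshift).
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE notional (symbol TEXT, total REAL)")
dst.executemany("INSERT INTO notional VALUES (?, ?)", rows)
```

In production the extract and load steps would use database drivers and AWS SDK calls rather than SQLite connections, but the three-phase shape is the same.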
    $71k-91k yearly est. 2d ago
  • Snowflake Data Engineer

Covetus 3.8 company rating

    Durham, NC jobs

• Experienced developer, proficient in SQL, with knowledge of Snowflake cloud computing environments • Knowledge of data warehousing concepts and metadata management • Experience with data modeling, data lakes, multi-dimensional models, and data dictionaries • Hands-on experience with Snowflake features like Time Travel and Zero-Copy Cloning • Experience in query performance tuning and cost optimization in a cloud data platform • Knowledge of Snowflake warehousing, architecture, processing, and administration; DBT; and pipelines • Hands-on experience with PL/SQL and Snowflake • Excellent personal communication, leadership, and organizational skills • Well versed in various design patterns • Knowledge of SQL databases is a plus • Hands-on Snowflake development experience is a must • Works with various cross-functional groups and tech leads from other tracks • Works closely with the team, guiding them technically and functionally • Must be a team player with a good attitude
    $74k-97k yearly est. 1d ago
  • Senior Data Engineer

Zillion Technologies, Inc. 3.9 company rating

    McLean, VA jobs

The candidate must have 5+ years of hands-on experience working with PySpark/Python, microservices architecture, AWS EKS, SQL, Postgres, DB2, Snowflake, Behave or Cucumber frameworks, Pytest (unit testing), automation testing, and regression testing. Experience with tools such as Jenkins, SonarQube, and/or Fortify is preferred for this role. Experience with Angular and DevOps is a nice-to-have for this role. Must-Have Qualifications: PySpark/Python-based microservices, AWS EKS, Postgres SQL database, Behave/Cucumber for automation, Pytest, Snowflake, Jenkins, SonarQube, and Fortify. Responsibilities: Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project. New system: Python and PySpark, AWS Postgres DB, Behave/Cucumber for automation, and Pytest. Perform system, functional, and data analysis on the current system and create technical/functional requirement documents. Current system: Informatica, SAS, AutoSys, DB2. Write automated tests using Behave/Cucumber, based on the new microservices-based architecture. Promote top code quality and solve issues related to performance tuning and scalability. Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, plus experience with SonarQube and Fortify. Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create necessary data mappings. Preferred: strong skills and experience in reporting application development and data analysis. Knowledge of Agile methodologies and technical documentation.
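As a hedged illustration of the Pytest-based unit testing this role calls for: a Pytest test is just a function of plain `assert` statements that the runner collects by its `test_` prefix. The status-mapping function below is invented for illustration, not taken from the actual system being modernized.

```python
def map_status(code: str) -> str:
    """Translate a source-system status code to the modernized schema.
    (Hypothetical mapping, for illustration only.)"""
    mapping = {"A": "ACTIVE", "C": "CLOSED"}
    return mapping.get(code, "UNKNOWN")


def test_map_status():
    # Pytest would discover this function automatically by its test_ prefix.
    assert map_status("A") == "ACTIVE"
    assert map_status("C") == "CLOSED"
    assert map_status("X") == "UNKNOWN"


test_map_status()  # invoked directly here; under Pytest the runner calls it
```

Behave/Cucumber tests cover the same logic at the behavior level, phrased as Given/When/Then scenarios rather than direct assertions.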
    $77k-109k yearly est. 4d ago
  • Android Mobile Reverse Engineer: All Levels (Requires Applicants Currently Hold TS Level Clearance) - FUTURE NEED

Cipher Tech Solutions 4.3 company rating

    Requirements engineer job at Cipher

    Reverse engineering Android malware, defeating obfuscation techniques and collecting evidence of malicious behavior. Provide support in the form of scripting to enable better application triage and value-added reverse engineering expertise. Integrate existing scripts into our analysis pipelines and develop new scripts in response to customer specifications. Reverse Engineers will be working primarily with decompiled Java but will occasionally encounter Java/Smali bytecode and native libraries that require familiarity with the ARM assembly language. Regardless of whether you are applying to a software developer or reverse engineer position, if your application is approved, we will test you on both skills as part of our hiring process. At this time, the majority of our open positions require an aptitude in both domains. Core Position Requirements: * US Citizenship required (no exceptions) * Applicants MUST hold a TS (Top Secret) Level US Government Security Clearance at time of application. * Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, Cyber Security, or Math A minor in one of the above degree types is also acceptable if degree major is a hard STEM discipline. Technical Requirements: * Must possess strong software development skills * Must be able to read and/or write x86, x64, and/or ARM Assembly code * Able to create and debug scripts that enable reverse engineering * Familiar with maintainable software processes, including: software documentation, source code versioning, unit testing * Able to convey technical topics verbally, in source code comments, and documentation. 
* Ability to work independently and as part of a team in a fast-paced environment with ever-changing requirements * Possess one of the following: * Experience developing in Java and an interest in learning about Java bytecode and ARM assembly * Experience reverse engineering binaries with Ghidra or IDA Pro Desirable: * Prior Android development experience * Experience reverse engineering mobile applications * Familiarity with APK decompilation tools such as JEB, JADX, or Dex2Jar * Experience reverse engineering native ARM binaries with IDA Pro or Ghidra * Familiarity with the ARM assembly language * Familiarity with hex editors and working at the byte level * Platform-specific mobile development language experience using Kotlin Key Qualities: * Self-motivated, detail-oriented, strong work ethic * Able to quickly adapt to new situations and problems * Able to operate in mission-critical and time-sensitive environments * Able to work in a team environment Benefits: * Mission-focused work environment * Subsidized medical/dental/vision insurance * HSA contributions * 401k matching * Holiday and Paid Time Off (PTO) * Monthly WAN parties & quarterly events * Flexible hours Remote work is not an option. We do offer relocation assistance. Please see our careers page or inquire with your recruiter about the terms and conditions. Salary and Other Compensation: The hourly rate of pay ranges from $43.27 - $96.15. The salary range for this position is subject not only to the hire's skills, education, expertise, and years of experience, but also to features of the final position offered to the hire, including, but not limited to, the location of the position, the clearance required for the position, and the contract associated with the position. See our Perks & Benefits page for a general description of benefits and other potential compensation benefits: *********************************************************** About Us: At Cipher Tech Solutions, Inc. 
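As a hedged sketch of the triage scripting this role describes: an APK is a ZIP archive, so a first-pass script can inventory its DEX bytecode and native libraries before any decompilation or Ghidra/IDA work begins. The helper name and the toy archive contents below are invented for illustration.

```python
import io
import zipfile


def triage_apk(data: bytes) -> dict[str, list[str]]:
    """Inventory an APK's DEX files and native libraries (first-pass triage)."""
    report: dict[str, list[str]] = {"dex": [], "native": []}
    with zipfile.ZipFile(io.BytesIO(data)) as apk:
        for name in apk.namelist():
            if name.endswith(".dex"):
                report["dex"].append(name)        # Dalvik bytecode to decompile
            elif name.startswith("lib/") and name.endswith(".so"):
                report["native"].append(name)     # ARM/x86 libraries for Ghidra/IDA
    return report


# Build a toy "APK" in memory to exercise the function (contents are fabricated).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"\x00")
    z.writestr("lib/arm64-v8a/libnative.so", b"\x00")
report = triage_apk(buf.getvalue())
```

A production triage script would go further, hashing each member and flagging known-bad signatures, but the archive walk above is the common starting point.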
(Cipher Tech), we take great pride in supporting some of the most mission-critical work in the Department of Defense and the Intelligence Community. The majority of our company is comprised of extremely talented software developers who write production tools that directly support one of two missions: digital forensics and malware reverse engineering.

Digital forensics is the practice of extracting and processing evidence from computers, phones, tablets, cameras, flash drives, and just about any other form of digital media. The results of our work are then used to support federal and military law enforcement, counterintelligence, and counterterrorism activities.

Malware reverse engineering is the practice of analyzing malicious code. The goal of a reverse engineer is to understand how the malicious code works and determine facts such as:

* How it infected the victim's computer
* What kind of information it is seeking to steal
* How it communicates with its author (the actor)
* Who that actor might be

Vaccination Notice:

Please be aware that you are applying for a position to work as a federal contractor. As such, Cipher Tech will require, and your employment will be subject to, such vaccinations as are required by federal, state, and local law.

Important Security Clearance Information:

Be aware that you are applying for a job that requires a U.S. Government security clearance. The U.S. government will not issue security clearances to recent users of illegal drugs. On your application to us, you must acknowledge that you have not engaged in any illegal drug use within the past twelve months (illegal drug use includes, but is not limited to, the use of illegal substances, the misuse of prescriptions, the misuse of over-the-counter substances, and the use of medical or recreational marijuana that may be deemed legal at the state level but is still considered illegal by the federal government).
NOTE: CIPHER TECH WILL NOT DISCRIMINATE AGAINST ANY QUALIFIED APPLICANTS WITH A DISABILITY, INCLUDING ILLEGAL DRUG USERS WHO HAVE RECENTLY SOUGHT PROFESSIONAL TREATMENT FOR A DRUG PROBLEM.

Cipher Tech is an Equal Opportunity Employer. All hiring and employment decisions are made on the basis of business requirements, performance, and merit, with no regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or other protected status. All qualified applicants will receive consideration for employment without regard to these characteristics.
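To give a flavor of the application-triage scripting this role calls for, here is a minimal, hypothetical sketch in Python (illustrative only, not a Cipher Tech tool). An APK is a ZIP archive, so a first-pass triage script can enumerate its DEX files (which lead to decompiled Java or Smali analysis) and its native libraries (which lead to ARM assembly analysis):

```python
import zipfile

def triage_apk(path):
    """Rough first-pass triage of an APK: report which contained files
    call for bytecode-level analysis (.dex) and which call for native
    ARM analysis (lib/**/*.so). Hypothetical helper for illustration."""
    report = {"dex": [], "native": []}
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            if name.endswith(".dex"):
                report["dex"].append(name)
            elif name.startswith("lib/") and name.endswith(".so"):
                report["native"].append(name)
    return report

# Build a toy "APK" (just a ZIP) to demonstrate the triage pass:
with zipfile.ZipFile("demo.apk", "w") as z:
    z.writestr("classes.dex", b"dex\n035\x00")
    z.writestr("lib/arm64-v8a/libnative.so", b"\x7fELF")

print(triage_apk("demo.apk"))
# → {'dex': ['classes.dex'], 'native': ['lib/arm64-v8a/libnative.so']}
```

Real triage pipelines of the kind described above would go further (manifest parsing, signature checks, string extraction), but the pattern of scripting around an archive format is representative.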
