Apply Here

  • Job ID:

    Job-1853
  • Job Title:

    Data Integration Developer
  • Location:

    Remote (USA)
  • Duration:

    1 year+
  • Job Description:

    Top 3-5 Must Haves
    • Experience working with large-scale data pipelines and cloud infrastructure (cloud ETL tools such as Glue, and data warehousing solutions such as Redshift)
    • Knowledge of deploying and maintaining cloud-based infrastructure for data workflows (e.g., AWS, GCP, Azure, Redshift)
    • 2 years of technical expertise in cloud applications, data ingestion, and the Databricks Data Lakehouse platform
    • 4 years of extensive hands-on experience building ETL interfaces using DataStage version 11.7 to aggregate, cleanse, and migrate data across enterprise-wide Big Data and Data Warehousing systems using staged data processing techniques, patterns, and best practices
    • A combined 4 years of experience with advanced SQL and stored procedures (on the DB2, SQL Server, and Oracle database platforms), with hands-on experience designing solutions for optimal performance and handling other non-functional aspects of the DataStage ETL platform, such as availability, reliability, and security
    DELIVERABLES OR TASKS:
    • Recommend best practices for the ETL process: extract data from disparate source transaction systems, transform and enrich the data, and load it into data models optimized for analysis and reporting.
    • Facilitate a best-in-class data integration platform.
    • Recommend and implement DataStage 11.7 features for optimization.
    • Recommend training and mentoring plans for transferring knowledge to current employees.
    • Deliver quality results on tasks and assignments.

    TECHNICAL KNOWLEDGE AND SKILLS:
    • Strong analytical skills, with the ability to analyze information and to identify and formulate solutions to problems. Provides in-depth analysis while maintaining a high-level view of goals and end deliverables.
    • Over 5 years of proven work experience with DataStage version 11.0 or above, including over two years with DataStage version 11.7, is a must.
    • Over 3 years of proven work experience with scripting languages such as Perl and shell, and with Linux/Unix servers, file structures, and job scheduling.
    • Extensive hands-on experience building ETL interfaces using DataStage version 11.7 to aggregate, cleanse, and migrate data across enterprise-wide Big Data and Data Warehousing systems using staged data processing techniques, patterns, and best practices.
    • Experience setting up interfaces from the DataStage server to platforms/tools such as a Hadoop data lake, SAS executions, and MicroStrategy Command Manager.
    • Experience working with large-scale data pipelines and cloud infrastructure (cloud ETL tools such as Glue, and data warehousing solutions such as Redshift)
    • Strong technical expertise in cloud applications, data ingestion, and data lake architecture.
    • Strong experience in full life-cycle management of capturing, versioning, and migrating DataStage ETL metadata, including data mappings and other data integration artifacts (such as schedulers and scripts), across environments using vendor platforms such as DataStage or other equivalent tools (including open source), by establishing standards, guidelines, and best practices.
    • Combined experience with advanced SQL and stored procedures (on the DB2, SQL Server, and Oracle database platforms), with hands-on experience designing solutions for optimal performance and handling other non-functional aspects of the DataStage ETL platform, such as availability, reliability, and security.
    • Proficiency working with Unix and Linux servers and job scheduling, as well as scripting languages such as C, shell script (sh), AWK, and sed.
    • Experience with both normalized and dimensional data models; hands-on knowledge of other data integration techniques such as database replication and change data capture (CDC); and familiarity with SOA and ESB technologies and patterns.
    • Working knowledge of DataStage administration and best practices is highly recommended.
    • Technical knowledge in predictive analytics architecture and in development and deployment of predictive models is a plus.
  • Job Type:

    Contract

 

Hear what our consultants have to say about us…

Saigeetha Govi

VP of Engineering

“Working with Buxton was awesome; the recruiters were very attentive to the needs of my team. They listened and changed the strategy for screening candidates based on my team’s needs. They were very flexible in terms of the contract, given Spigit was a very small company. I would love to utilize their services again in the future.”

VijayBalakrishna Girija

Windows Engineer

“I have been working at Buxton Consulting for more than 3 years. Salary is paid on time, and any query gets a proper response within an hour. Team members and project managers are very helpful.”