
Associate D&A Engineer - Data Engineering for AI 2020 National

Responsibilities & Qualifications
KPMG is currently seeking an Associate for our Lighthouse - Data & Analytics Engineer - Data Engineering for AI practice.

While this requisition may state a specific geographic office, please note that our positions are location flexible across our major hubs. Opportunities may include, but are not limited to, Atlanta, Chicago, Dallas, Denver, New York City, Orange County, Philadelphia, Seattle, and Washington DC. Please proceed with applying here, and let us know your location preference during the interview phase if applicable.

Responsibilities:
• Rapidly prototype, develop, and optimize data science and AI implementations to tackle the BI/EDW/Big Data and Data Science needs of a variety of Fortune 1000 corporations and other major organizations that are KPMG clients.
• Develop and maintain Data & Analytics solutions across on-premise, cloud, KPMG-hosted, and hybrid infrastructure. Be the team champion of mainstream BI/EDW/Big Data toolsets such as Tableau, Alteryx, Informatica, Pentaho, Erwin, and PowerDesigner. Own data in cross-disciplinary teams. Build logical/physical data models.
• Discover, profile, acquire, process, model, and own data for the solutions. Implement data processing pipelines, data mining/science algorithms, and visualization engineering to help clients distill insights from rich data sources, including social media, news, internal/external documents, emails, financial data, client data, and operational data.
• Help research and experiment with leading/emerging BI/EDW/Big Data methodologies, such as serverless data lakes, AWS Redshift, Athena, Glue, GCP BigQuery, and Microsoft Power BI, and apply them to real client solutions. From a data engineering point of view, help pursue innovations, target solutions, and extendable platforms for Lighthouse, KPMG, and clients.

Qualifications:
• Bachelor's, Master's, or PhD degree from an accredited college or university in Computer Science, Computer Engineering, or a related field. Preferred: relevant work experience with data engineering exposure in related industries, ideally professional services.
• Ability to pick up and learn new technologies quickly. Working knowledge of RDBMS design, data modeling, and MPP EDW system implementation.
• Strong familiarity with BI/EDW/Big Data toolsets (Tableau, Alteryx, Informatica, Pentaho, Erwin, and PowerDesigner); knowledge of Linux/Unix/Windows/.NET; strong fluency in SQL and several scripting/programming languages (Bash/ksh/PowerShell; Python/Perl/R); hands-on knowledge of distributed computing architecture, including massively parallel processing big data platforms such as Hadoop, MapReduce, HDFS, Spark, Hive/Impala, HBase/MongoDB/Cassandra, and Teradata/Netezza/Redshift; hands-on exposure to mainstream cloud infrastructures (Amazon Web Services, Azure, and Google Cloud Platform) and microservices; and the ability to implement data lakes and serverless data lakes.
• Ability to travel up to 80% of the time, depending on project assignments.
• Targeted graduation date of Fall 2019 through Summer 2020.
Work Authorization
Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.