Lowe's Home Improvement

Data Scientist - NLP

  • Job Type: Full Time
  • Industry Type: IT Sector
  • Industry Location: Charlotte
  • Experience: NA
  • No. of Positions: 1
  • Primary Skills: NLP, Engineering, Computer Software, SDLC, Java, Python
  • Secondary Skills: SQL, Teradata, Hub, Apache, IT Library
  • Job Location: Charlotte, North Carolina
  • Posted Date: Today
Job Description

Job Summary:
Would you like to work for a pioneering AI solutions team delivering the next generation of efficient customer experience at Lowe's? We are seeking an enthusiastic, talented upper-mid-level to senior NLP Data Scientist to join our Data Science team. Our team is driven to make a difference for the customers and business partners we serve. In this role, you will work with other data scientists and engineers to help build our unstructured data extraction pipeline, leveraging the latest machine learning and natural language processing technology to structure and normalize data. The ideal candidate will have no issue digging into messy data and will work with business partners and subject matter experts to develop annotation guidelines that produce high-quality machine learning models. As a data scientist at Lowe's, you will have the opportunity to touch every part of the machine learning project lifecycle, from dataset curation to model deployment.

We are seeking a scientist with a background in NLP or computational linguistics, or experience with conversational interfaces, along with some development skills. A PhD with academic projects in NLP would be a strong plus.

Key Responsibilities:

  • Translates complex cross-functional business requirements and functional specifications into logical program designs, modules, stable application systems, and data solutions; partners with Product Team to understand business needs and functional specifications
  • Collaborates with cross-functional teams to ensure specifications are converted into flexible, scalable, and maintainable solution designs; evaluates project deliverables to ensure they meet specifications and architectural standards
  • Guides development teams in the design and build of complex Data or Platform solutions and ensures that teams are in alignment with the architecture blueprint, standards, target state architecture, and strategies
  • Coordinates, executes, and participates in component integration (CIT) scenarios, systems integration testing (SIT), and user acceptance testing (UAT) to identify application errors and to ensure quality software deployment
  • Participates and coaches others in all software development end-to-end product lifecycle phases by applying and sharing an in-depth understanding of complex company and industry methodologies, policies, standards, and controls
  • Has solid grasp of software design patterns and approaches; understands application level software architecture; makes technical trade-off decisions at application level
  • Automates and simplifies team development, test, and operations processes; develops detailed architecture plans for large scale enterprise architecture projects and drives the plans to fruition
  • Solves complex architecture/design and business problems; solutions are extensible; works to simplify, optimize, remove bottlenecks, etc.
  • Provides mentoring and guidance to more junior level engineers; may provide feedback and direction on specific engineering tasks



Data Engineering Responsibilities

  • Executes the development, maintenance, and enhancement of data ingestion solutions of varying complexity across data sources such as DBMSs, file systems (structured and unstructured), APIs, and streaming platforms, on both on-prem and cloud infrastructure; demonstrates strong acumen in data ingestion toolsets and nurtures and grows junior members in this capability
  • Excels in one or more domains; understands pipelines and business metrics and develops expertise on cloud-based data stacks and pipeline development
  • Builds, tests, and enhances data curation pipelines that integrate data from a wide variety of sources such as DBMSs, file systems, APIs, and streaming systems to develop KPIs and metrics with high data quality and integrity
  • Supports the development of features/inputs for data models in an Agile manner; hosts models via REST APIs; ensures non-functional requirements such as logging, authentication, error capturing, and concurrency management are accounted for when hosting models
  • Works with Data Science team to understand mathematical models and algorithms; recommends improvements to analytic methods, techniques, standards, policies and procedures; participates in continuous improvement activities including training opportunities; trains others
  • Handles the manipulation (extract, load, transform), visualization, and administration of data and systems securely and in accordance with enterprise data governance, staying compliant with industry best practices, enterprise standards, corporate policy, and department procedures
  • Maintains the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensures high availability of the platform; monitors workload demands; works with Infrastructure Engineering teams to maintain the data platform; serves as an SME for one or more applications



Minimum Qualifications:

  • Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
  • 5 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
  • 4 years of experience working on project(s) involving the implementation of solutions applying the software development life cycle (SDLC)



Data Engineering Qualifications

  • 3 years of experience in Hadoop or any cloud big data components (specific to the Data Engineering role)
  • Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud big data components (specific to the Data Engineering role)



Preferred Qualifications:

Note: In most cases, Lowe's will not be able to provide sponsorship for roles located in the Tech Hub.

  • Experience with Apache Presto and/or Apache Druid
  • Master's Degree in Computer Science, CIS, or related field
  • 5 years of IT experience developing and implementing business systems within an organization
  • 5 years of experience working with defect or incident tracking software
  • 5 years of experience writing technical documentation in a software development environment
  • 3 years of experience working with an IT Infrastructure Library (ITIL) framework
  • 3 years of experience leading teams, with or without direct reports
  • 5 years of experience working with source code control systems
  • Experience working with Continuous Integration/Continuous Deployment tools
  • 5 years of experience in systems analysis, including defining technical requirements and performing high level design for complex solutions



Lowe's is an equal opportunity affirmative action employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
