Senior Data Architect
Company:
Ferguson is a leading distributor of plumbing supplies and other products, serving various industries since 1953.
Summary:
As a Senior Data Architect, you will develop and maintain data architecture and processes within Ferguson's Enterprise Data & Analytics team. Candidates should have at least five years of data-related experience, expertise in BI tools, SQL proficiency, and knowledge of cloud platforms.
Requirements:
Technology: Power BI, SQL, Python, Azure
Job Description:
Since 1953, Ferguson has been a source of quality supplies for a variety of industries. Together We Build Better infrastructure, better homes and better businesses. We exist to make our customers’ complex projects simple, successful, and sustainable. We proactively solve problems, adapt and grow to continuously serve our customers, communities and each other. Ferguson is proud to provide best-in-class products, service and capabilities across the following industries: Commercial/Mechanical, Facilities Supply, Fire and Fabrication, HVAC, Industrial, Residential Trade, Residential Building and Remodel, Waterworks and Residential Digital Commerce. Ferguson has approximately 36,000 associates across 1,700 locations. Ferguson is a community of proud associates who operate with the shared purpose of building something meaningful. You will build a career that you are proud of, at a company you can believe in.
Duties and Responsibilities:
- Data Architecture: Develop and maintain the overall data architecture strategy, including data integration, data warehousing, and data governance. Ensure alignment with business goals and objectives.
- Data Modeling: Design and develop robust data models that support reporting and analytics requirements, particularly for dashboards built in tools like Power BI.
- ETL/ELT Processes: Design and implement efficient ELT processes to integrate data from various sources into the data lake. Optimize data pipelines for performance and scalability.
- Data Quality: Establish and enforce data quality standards and best practices. Implement data validation and cleansing processes to ensure high data quality and reliability.
- Collaboration: Work closely with business analysts, product owners, data scientists, and other collaborators to understand data requirements and translate them into technical solutions. Provide guidance and support to development teams on data-related issues.
- Documentation: Create, maintain, and review comprehensive documentation of data models, data flows, and data architecture. Ensure documentation is up-to-date and accessible to relevant partners.
- Innovation: Stay current with industry trends and emerging technologies in data architecture, data modeling, and analytics. Find opportunities for innovation and continuous improvement.
Qualifications and Requirements:
- 5+ years of overall data-related experience with a strong emphasis on data engineering and modeling preferred.
- Experience modeling data for analytics and reporting in popular BI tools like Power BI, Tableau, QlikView, Looker, or others is required.
- Experience writing and troubleshooting SQL queries is required, and experience with other common data manipulation languages like Python is highly desirable.
- Experience performing EDA activities on both structured (tabular) and semi-structured (document) data, including assessing data quality and completeness.
- Experience creating and applying standardized data architecture documentation like data source definitions, source-to-target mappings, and entity relationship diagrams.
- Experience building batch or real-time ETL pipelines with an emphasis on data processing standardization and reusability.
- Experience architecting and delivering solutions in a cloud platform (Azure, AWS, and/or GCP) with an emphasis on data orchestration and enrichment. Experience on Azure is preferred.
- Experience with traditional RDBMS solutions like SQL Server, Oracle, Postgres, and/or MySQL is required.
- Experience with big data platforms like Azure Synapse, Databricks, Snowflake, Amazon Redshift, and/or Google BigQuery is preferred.
- Experience with NoSQL databases like Azure Cosmos DB, Amazon DynamoDB, and/or MongoDB is desirable.
- Experience working with data scientists and machine learning engineers applying machine learning models is desirable.
- Experience with real-time or near-real-time streaming platforms like Azure Event Hubs, Apache Kafka, and/or Amazon Kinesis is desirable.