Job Description (職位描述)
Core Responsibilities (核心職責(zé))
1. Requirements Definition (需求定義)
Proactively participate in requirements gathering sessions with data analysts, translating business needs into technical specifications.
2. Agile Engagement (敏捷協(xié)作)
Fully engage in all Scrum agile ceremonies – sprint planning, retrospectives, and daily stand-ups.
3. Data Modeling & Architecture (數(shù)據(jù)建模與架構(gòu)設(shè)計)
Lead data modeling initiatives, designing optimal data warehouse schemas based on complex business requirements to ensure scalability, performance, and analytical integrity.
4. Data Development & Ownership (數(shù)據(jù)開發(fā)與所有權(quán))
Design, develop, and maintain high-performance ETL pipelines and data integration solutions. Take full ownership of the end-to-end data flow, ensuring accuracy, efficiency, and reliability from ingestion to final consumption.
5. Code Quality & Standards (代碼質(zhì)量與標(biāo)準(zhǔn))
Ensure standard development practices are meticulously followed across the team. Play a key role in quality assurance by performing rigorous code reviews of data developers’ contributions, actively mentoring teammates and elevating team-wide coding standards.
6. Defect Resolution & Validation (缺陷解決與校驗(yàn))
Collaborate actively with team members to promptly resolve defects identified during the validation of converted data, upholding the highest data quality standards.
7. Documentation & Knowledge Management (文檔與知識管理)
Proactively maintain comprehensive ETL process documentation, fostering a strong culture of knowledge sharing and operational transparency.
8. Process Improvement & Innovation (流程優(yōu)化與創(chuàng)新)
Continuously identify areas for improvement in ETL processes and broader data engineering practices, driving efficiency and innovation.
Qualifications (任職要求)
Educational Background:
- Bachelor's degree or above in Computer Science, Information Systems, or a closely related quantitative field from a highly reputable institution.
Professional Experience:
- Minimum of 1 year of highly relevant, impactful ETL development, Data Warehousing, or Data Engineering work experience.
- Prior experience within a fast-paced internet company, preferably in the OTA, e-commerce, or similar high-data-volume sectors, is a significant advantage.
- Demonstrated experience with data modeling concepts and their practical application.
Technical Proficiency:
- Proficient in modern big data technologies and cloud data platforms, including Google BigQuery (strongly preferred), Hadoop, Spark, Hive, and Kafka.
- Expertise in SQL is mandatory, along with strong scripting skills in languages such as Python and Shell.
- Familiarity with LLMs, AIGC, or AgentFlow is a plus.
Problem-Solving & Responsibility:
- Exceptional debugging capabilities and strong problem-solving skills, with a proven ability to independently manage and resolve critical incidents effectively.
- A profound sense of ownership and accountability for the quality and reliability of data pipelines and assets.
Communication & Collaboration:
- Excellent communication skills, fluent in English (both written and verbal), capable of articulating complex technical topics to diverse audiences.
- A strong team player with an ability to respect differences of opinion and foster a one-team spirit in a collaborative agile environment.
Growth Mindset & Drive:
- Quick learning capability in rapidly changing environments, coupled with strong self-learning skills and a passionate drive for continuous improvement.
- A proactive, can-do attitude, eager to embrace new challenges and technologies.
- Strong business sense and analytical ability to translate data into tangible business impact.
Updated on 2026-01-10