Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.9.0, 0.10.0
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
Currently, sending records to partitioned Kite datasets via Flume requires the user to configure the HDFS file path by hand so that it matches the dataset's partition strategy. From the examples:

tier1.sinks.sink-1.hdfs.path = /tmp/data/events/year=%{cdk.partition.year}/month=%{cdk.partition.month}/day=%{cdk.partition.day}/hour=%{cdk.partition.hour}/minute=%{cdk.partition.minute}

This leaves a lot up to the user and isn't clearly documented. Ideally, users would configure a Kite dataset sink that handles partitioning itself.
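A sketch of how the two approaches might compare in a Flume agent configuration. The proposed sink's type name and `kite.*` property names below are illustrative assumptions, since this issue only proposes such a sink; the existing `hdfs.path` line is taken from the examples above.

```properties
# Current approach: the HDFS path must be written by hand and kept in
# sync with the dataset's partition strategy (error-prone, undocumented).
tier1.sinks.sink-1.type = hdfs
tier1.sinks.sink-1.hdfs.path = /tmp/data/events/year=%{cdk.partition.year}/month=%{cdk.partition.month}/day=%{cdk.partition.day}/hour=%{cdk.partition.hour}/minute=%{cdk.partition.minute}

# Proposed approach (hypothetical sink and property names): a dedicated
# Kite dataset sink reads the partition strategy from the dataset's own
# metadata, so the user only names the repository and dataset.
tier1.sinks.sink-1.type = org.apache.flume.sink.kite.DatasetSink
tier1.sinks.sink-1.kite.repo.uri = repo:hdfs:/tmp/data
tier1.sinks.sink-1.kite.dataset.name = events
```

With the proposed sink, changing the dataset's partition strategy would not require touching the Flume configuration at all, which removes the duplication the description complains about.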
Issue Links
- relates to KITE-255 Remove Kite's log4j appender (Open)