In https://github.com/awslabs/aws-glue-schema-registry?tab=readme-ov-file#flink-kafka-consumer-with-avro-format, when Flink only needs to read generic Avro records, why do we need to supply the actual schema?

When a message is read from Kafka, the magic bytes are used to look up the schema ID in the schema registry, which is then used to fetch the actual schema. If the actual schema is already in the schema registry, why do we still need to pass a schema when creating the Flink source?
In our case, we have a large number of schemas and would like a single source that can read all of them.
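For context, here is a minimal sketch of the source setup the linked README section describes; the topic name, broker address, region, group ID, and reader schema below are placeholder values, not part of the original question:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import com.amazonaws.services.schemaregistry.flink.avro.GlueSchemaRegistryAvroDeserializationSchema;
import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants;
import com.amazonaws.services.schemaregistry.utils.AvroRecordType;

public class GenericAvroSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        properties.setProperty("group.id", "example-group");           // placeholder

        Map<String, Object> configs = new HashMap<>();
        configs.put(AWSSchemaRegistryConstants.AWS_REGION, "us-east-1"); // placeholder
        configs.put(AWSSchemaRegistryConstants.AVRO_RECORD_TYPE,
                AvroRecordType.GENERIC_RECORD.getName());

        // The reader schema must be supplied up front, even though the writer
        // schema is fetched from the registry at read time via the schema
        // version ID embedded in each message. This placeholder schema stands
        // in for whatever schema the source is expected to produce records in.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Example\",\"fields\":"
                + "[{\"name\":\"id\",\"type\":\"string\"}]}");

        FlinkKafkaConsumer<GenericRecord> consumer = new FlinkKafkaConsumer<>(
                "my-topic", // placeholder
                GlueSchemaRegistryAvroDeserializationSchema.forGeneric(schema, configs),
                properties);

        DataStream<GenericRecord> stream = env.addSource(consumer);
        stream.print();
        env.execute("gsr-generic-avro-example");
    }
}
```

The question above boils down to the forGeneric(schema, configs) call in this sketch: since the writer schema is resolved from the registry per message anyway, it is unclear why a single fixed schema has to be passed at source-construction time.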