Parsing big Data blobs where all related bytes aren't sequential #23
-
I'm parsing a big chunk of data where the related bytes aren't sequential. Any thoughts on parsing strategies?
-
@ryanbooker
-
I'll need to find a better strategy than the naive approach anyway. At around 16 chained parsers (…). Perhaps I need some intermediate chunks, and a second pass to create some cleaner, more useful groupings/types.
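The "intermediate chunks, then a second pass" idea could look something like the following minimal sketch. All the names (`RawChunk`, `chunk`, `group`) and the toy tag/length layout are hypothetical, purely for illustration; they are not from the thread or any library:

```swift
// Hypothetical sketch of a two-pass approach: first cut the blob into
// tagged raw chunks, then group related chunks in a second pass.
struct RawChunk {
    let tag: UInt8
    let payload: [UInt8]
}

// Pass 1: assume a simple [tag, length, payload...] layout for the sketch.
func chunk(_ bytes: [UInt8]) -> [RawChunk] {
    var chunks: [RawChunk] = []
    var i = 0
    while i + 1 < bytes.count {
        let tag = bytes[i]
        let len = Int(bytes[i + 1])
        let start = i + 2
        let end = min(start + len, bytes.count)
        chunks.append(RawChunk(tag: tag, payload: Array(bytes[start..<end])))
        i = end
    }
    return chunks
}

// Pass 2: group related chunks by tag into something cleaner to work with.
func group(_ chunks: [RawChunk]) -> [UInt8: [[UInt8]]] {
    var grouped: [UInt8: [[UInt8]]] = [:]
    for c in chunks {
        grouped[c.tag, default: []].append(c.payload)
    }
    return grouped
}

let blob: [UInt8] = [0x01, 0x02, 0xAA, 0xBB, 0x02, 0x01, 0xCC, 0x01, 0x01, 0xDD]
let grouped = group(chunk(blob))
// grouped[0x01] == [[0xAA, 0xBB], [0xDD]], grouped[0x02] == [[0xCC]]
```

The point is only the shape: the first pass stays dumb and linear, and all the "which bytes belong together" logic moves into the second pass over already-delimited chunks.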
-
I found the following higher-order parser useful for this big, messy data blob. Parsing it linearly and then sorting out what's what was a huge pain, but being able to logically group things by repeatedly parsing out just the relevant bits seems to work quite well.

```swift
public extension Parser {
  /// Returns a parser that runs this parser without consuming the input.
  ///
  /// - Returns: A parser that runs this parser without consuming the input.
  @inlinable
  func nonConsuming() -> Parsers.NonConsuming<Self> {
    .init(self)
  }
}

public extension Parsers {
  /// A parser that runs a parser without consuming the input.
  struct NonConsuming<P>: Parser
  where P: Parser {
    public let p: P

    @inlinable
    public init(_ p: P) {
      self.p = p
    }

    @inlinable
    public func parse(_ input: inout P.Input) -> P.Output? {
      let original = input
      defer { input = original }
      return p.parse(&input)
    }
  }
}
```

e.g. Each parser operates on the entire blob:

```swift
let parser = aParser.nonConsuming()
  .take(bParser.nonConsuming())
  .take(cParser.nonConsuming())
  .map { a, b, c in
    // much more manageable
  }
```
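To make the non-consuming idea concrete outside any particular parsing library, here is a minimal, self-contained sketch. The `AnyParser` type and the `valueAfter` parser are hypothetical stand-ins for illustration; only the peek-and-restore trick mirrors the `NonConsuming` parser above:

```swift
// A stripped-down parser abstraction, analogous to the protocol in the post.
struct AnyParser<Input, Output> {
    let run: (inout Input) -> Output?

    // Peek: run the parser, then restore the input so nothing is consumed.
    func nonConsuming() -> AnyParser<Input, Output> {
        AnyParser { input in
            let original = input
            defer { input = original }
            return self.run(&input)
        }
    }
}

// A toy "field" parser: scan for a marker byte, return the byte after it.
func valueAfter(marker: UInt8) -> AnyParser<ArraySlice<UInt8>, UInt8> {
    AnyParser { input in
        while let first = input.first {
            input = input.dropFirst()
            if first == marker {
                guard let value = input.first else { return nil }
                input = input.dropFirst()
                return value
            }
        }
        return nil
    }
}

let bytes: [UInt8] = [0x01, 0xAA, 0x99, 0x02, 0xBB, 0x03, 0xCC]
var blob = bytes[...]

// Because nothing is consumed, each field parser scans the *entire* blob,
// so related bytes don't need to be sequential.
let a = valueAfter(marker: 0x01).nonConsuming().run(&blob) // 0xAA
let b = valueAfter(marker: 0x02).nonConsuming().run(&blob) // 0xBB
let c = valueAfter(marker: 0x03).nonConsuming().run(&blob) // 0xCC
```

The design trade-off is that every field parser re-scans the whole input, which is O(n) per field; for a blob with scattered related bytes that is often a fair price for keeping each parser simple and independent.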