Most people would use “word”, “half-word”, “quarter-word” etc, but the Anglophiles insist on “tuppit”, “ternary piece”, “span” and “chunk” (that’s 5 bits, or 12 old bits).
Maybe it was due to attempting the puzzles in real-time for the first time, but it felt like there was quite a spike in difficulty this year. Day 5 (If You Give A Seed A Fertilizer) in particular was pretty tough for an early puzzle.
Day 8 (Haunted Wasteland), Day 20 (Pulse Propagation) and Day 21 (Step Counter) were (I felt) a bit mean due to hidden properties of the input data.
I particularly liked Day 6 (Wait For It), Day 14 (Parabolic Reflector Dish) and Day 24 (Never Tell Me The Odds), although that one made my brain hurt.
Day 25 (Snowverload) had me reading research papers, although in the end I stumbled across Karger’s algorithm. That’s the first time I’ve used a probabilistic approach. This solution in particular was very clever.
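For anyone who hasn't met it, the core of Karger's algorithm is a single randomized "contraction" trial: repeatedly pick a random edge, glue its endpoints together, and when only two super-vertices remain, the surviving edges form a cut. Below is a minimal, dependency-free sketch of that trial (not my actual Day 25 solution); the tiny LCG and the example graph are made up for illustration, and a real run would repeat the trial many times since each one only finds a minimum cut with some probability.

```haskell
import Data.List (nub)

-- Tiny linear congruential generator so the sketch needs no packages.
-- (Constants are from common LCG folklore; any decent mixer would do.)
lcg :: Int -> Int
lcg s = (6364136223846793005 * s + 1442695040888963407) `mod` 2 ^ 62

-- One contraction trial: contract random edges until two super-vertices
-- remain, then return the number of crossing edges (a candidate cut).
trial :: Int -> [(Int, Int)] -> (Int, Int)
trial seed es
  | length (nub (concatMap (\(a, b) -> [a, b]) es)) <= 2 = (length es, seed)
  | otherwise =
      let s' = lcg seed
          (u, v) = es !! (s' `mod` length es) -- pick a random edge
          merge w = if w == v then u else w -- glue v into u
          es' = [e | e@(a, b) <- map (\(a, b) -> (merge a, merge b)) es, a /= b]
       in trial s' es'

-- Repeat the trial, keeping the smallest cut seen.
minCut :: Int -> Int -> [(Int, Int)] -> Int
minCut trials seed es = go trials seed maxBound
  where
    go 0 _ best = best
    go k s best = let (c, s') = trial s es in go (k - 1) s' (min best c)

main :: IO ()
main = do
  -- Two triangles joined by a single bridge edge; the minimum cut is 1.
  let g = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
  print (minCut 200 1 g)
```

The edge-list representation makes contraction a one-liner (rename and drop self-loops) at the cost of O(m) work per step, which is fine for a sketch.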
I learned the Shoelace formula and Pick’s theorem this year, which will be very helpful to remember.
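The two fit together neatly: the shoelace formula gives the area A of a simple polygon from its vertices, and Pick's theorem (A = i + b/2 - 1) then recovers the interior lattice point count i from A and the boundary count b. A minimal sketch, assuming integer vertices listed in order (the function names are mine, not from any puzzle solution):

```haskell
-- Shoelace formula: twice the (unsigned) area of a simple polygon,
-- summing the cross products of consecutive vertex pairs.
shoelace2 :: [(Int, Int)] -> Int
shoelace2 vs = abs . sum $ zipWith cross vs (tail vs ++ [head vs])
  where
    cross (x1, y1) (x2, y2) = x1 * y2 - x2 * y1

-- Pick's theorem: A = i + b/2 - 1, rearranged to i = A - b/2 + 1,
-- where i counts interior lattice points and b boundary lattice points.
interiorPoints :: [(Int, Int)] -> Int -> Int
interiorPoints vs b = (shoelace2 vs - b) `div` 2 + 1

main :: IO ()
main = do
  -- A 3x3 axis-aligned square: area 9, 12 boundary points, 4 interior.
  let square = [(0, 0), (3, 0), (3, 3), (0, 3)]
  print (shoelace2 square `div` 2) -- 9
  print (interiorPoints square 12) -- 4
```

Working with twice the area keeps everything in integers, since the shoelace sum for a lattice polygon is always even when the area is whole.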
Oh, just like day 11! I hadn’t thought of that. I was initially about to try something similar by separating into rectangular regions, as in ear-clipping triangulation. But that would require a lot of iterating, and something about “polygon” and “walking the edges” went ping in my memory…
Some (not very insightful or helpful) observations:
The shortest path is likely to be mostly monotonic (it’s quite hard for the “long way round” to be cost-effective), so the Manhattan distance is probably a good metric.
The center of the puzzle is expensive, so the straight-line distance is probably not a good metric.
I’m pretty sure that the shortest route (for part one at least) can’t self-intersect. Implementing this constraint is probably not going to speed things up, and there might be some pathological case where it’s not true.
Not an optimization, but I suspect that a heuristic-based “reasonably good” path such as a human would take will be fairly close to optimal.
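To make the first observation concrete: since every cell costs at least 1, the Manhattan distance to the goal never overestimates the remaining cost, so it is an admissible A* heuristic. A sketch (not anyone's posted solution; the names are mine):

```haskell
-- Manhattan distance between a position and the goal. With per-cell
-- costs of at least 1, the true remaining cost can never be smaller,
-- which is exactly the admissibility condition for A*.
manhattan :: (Int, Int) -> (Int, Int) -> Int
manhattan (y, x) (gy, gx) = abs (gy - y) + abs (gx - x)

-- A* orders its queue by g + h (cost so far plus heuristic),
-- where Dijkstra uses plain g.
priority :: Int -> (Int, Int) -> (Int, Int) -> Int
priority g pos goal = g + manhattan pos goal
```

Plugging this into the queue key is a drop-in change to a Dijkstra search and keeps the result exact, only reducing the number of states expanded.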
Wowee, I took some wrong turns solving today’s puzzle! After fixing some really inefficient pruning I ended up with a Dijkstra search that runs in 2.971s (for a less-than-impressive 124.782 l-s).
Solution

import Control.Monad
import Data.Array.Unboxed (UArray)
import qualified Data.Array.Unboxed as Array
import Data.Char
import qualified Data.HashSet as Set
import qualified Data.PQueue.Prio.Min as PQ

readInput :: String -> UArray (Int, Int) Int
readInput s =
  let rows = lines s
   in Array.amap digitToInt
        . Array.listArray ((1, 1), (length rows, length $ head rows))
        $ concat rows

walk :: (Int, Int) -> UArray (Int, Int) Int -> Int
walk (minStraight, maxStraight) grid = go Set.empty initPaths
  where
    initPaths = PQ.fromList [(0, ((1, 1), (d, 0))) | d <- [(0, 1), (1, 0)]]
    goal = snd $ Array.bounds grid
    go done paths =
      case PQ.minViewWithKey paths of
        Nothing -> error "no route"
        Just ((n, (p@(y, x), hist@((dy, dx), k))), rest)
          | p == goal && k >= minStraight -> n
          | (p, hist) `Set.member` done -> go done rest
          | otherwise ->
              let next = do
                    h'@((dy', dx'), _) <-
                      join
                        [ guard (k >= minStraight) >> [((dx, dy), 1), ((-dx, -dy), 1)],
                          guard (k < maxStraight) >> [((dy, dx), k + 1)]
                        ]
                    let p' = (y + dy', x + dx')
                    guard $ Array.inRange (Array.bounds grid) p'
                    return (n + grid Array.! p', (p', h'))
               in go (Set.insert (p, hist) done) $ (PQ.union rest . PQ.fromList) next

main = do
  input <- readInput <$> readFile "input17"
  print $ walk (0, 3) input
  print $ walk (4, 10) input

(edited for readability)
I’m not fluent in Rust, but is this something like the C++ placement new? Presumably just declaring a table of Vecs won’t automatically call the default constructor? (Sorry for my total ignorance – pointers to appropriate reading material appreciated)
Took a while to figure out what part 2 was all about. Didn’t have the energy to golf this one further today, so looking forward to seeing the other solutions!
Solution
0.3 line-seconds

import Data.Char
import Data.List
import Data.List.Split
import qualified Data.Vector as V

hash :: String -> Int
hash = foldl' (\a c -> ((a + ord c) * 17) `rem` 256) 0

hashmap :: [String] -> Int
hashmap = focus . V.toList . foldl' step (V.replicate 256 [])
  where
    focus = sum . zipWith focusBox [1 ..]
    focusBox i = sum . zipWith (\j (_, z) -> i * j * z) [1 ..] . reverse
    step boxes s =
      let (label, op) = span isLetter s
          i = hash label
       in case op of
            ['-'] -> V.accum (flip filter) boxes [(i, (/= label) . fst)]
            ('=' : z) -> V.accum replace boxes [(i, (label, read z))]
    replace ls (n, z) =
      case findIndex ((== n) . fst) ls of
        Just j -> let (a, _ : b) = splitAt j ls in a ++ (n, z) : b
        Nothing -> (n, z) : ls

main = do
  input <- splitOn "," . head . lines <$> readFile "input15"
  print $ sum . map hash $ input
  print $ hashmap input
A little slow (1.106s on my machine), but list operations made this really easy to write. I expect somebody more familiar with Haskell than me will be able to come up with a more elegant solution.
Nevertheless, 59th on the global leaderboard today! Woo!
Solution

import Data.List
import qualified Data.Map.Strict as Map
import Data.Semigroup

rotateL, rotateR, tiltW :: Endo [[Char]]
rotateL = Endo $ reverse . transpose
rotateR = Endo $ map reverse . transpose
tiltW = Endo $ map tiltRow
  where
    tiltRow xs =
      let (a, b) = break (== '#') xs
          (os, ds) = partition (== 'O') a
          rest = case b of
            ('#' : b') -> '#' : tiltRow b'
            [] -> []
       in os ++ ds ++ rest

load rows = sum $ map rowLoad rows
  where
    rowLoad = sum . map (length rows -) . elemIndices 'O'

lookupCycle xs i =
  let (o, p) = findCycle 0 Map.empty xs
   in xs !! if i < o then i else (i - o) `rem` p + o
  where
    findCycle i seen (x : xs) =
      case seen Map.!? x of
        Just j -> (j, i - j)
        Nothing -> findCycle (i + 1) (Map.insert x i seen) xs

main = do
  input <- lines <$> readFile "input14"
  print . load . appEndo (tiltW <> rotateL) $ input
  print $ load $ lookupCycle (iterate (appEndo $ stimes 4 (rotateR <> tiltW)) $ appEndo rotateL input) 1000000000