This repository was archived by the owner on Jan 2, 2021. It is now read-only.

Residual #129

Open
wants to merge 60 commits into master
Changes from 1 commit
Commits
60 commits
fb0db99
Experiments with new convolutional architectures.
alexjc Apr 28, 2016
270524d
Integrated new model, works but doesn't have enough capacity?
alexjc May 2, 2016
a4713ab
Merge commit '23a99313f975a8c0a41e8d05ee448cc4e8c50411' into forward
alexjc May 4, 2016
1bcbf2c
Prototype of feed-forward architecture.
alexjc May 4, 2016
ffb7211
Re-introducing support for multiple phases, multiple iterations.
alexjc May 7, 2016
54070f9
Better default options. 2 phases, 2 iterations.
alexjc May 7, 2016
94004e2
Switching architectures and newly trained network, preparing integrat…
alexjc May 10, 2016
abaab87
Prototype incrementally propagating data down decoder to the output.
alexjc May 10, 2016
bead8a3
Layerwise decoding of the encoded + matched features, work in progress.
alexjc May 11, 2016
d77ed33
New balance parameter replaces both weights. Balance at 0.0 means ful…
alexjc May 11, 2016
7031555
Fix crash when a directory named "content" exists (#88)
longears May 12, 2016
419d752
Cleaning up multi-parameter implementation, supporting patch shapes.
alexjc May 12, 2016
9af43f6
Snap to grid depends on layers. Closes #87.
alexjc May 12, 2016
9c7125a
Reworking arguments to support nargs and lists, adding model configur…
alexjc May 12, 2016
5fbc79f
Support for network that goes all the way up to 6_1.
alexjc May 12, 2016
f3edeab
Layer-wise generation with micro-iterations to match patches.
alexjc May 13, 2016
9a188b8
Fully switching to a layerwise rather than iterative approach, removi…
alexjc May 13, 2016
72361f8
Simplifications to the iterations, better debug logging, reduce load …
alexjc May 13, 2016
9af96f5
Refined the logging, removed unused seeding code.
alexjc May 13, 2016
fbb318f
Major improvements to display, ASCII logo too! All code in forward br…
alexjc May 13, 2016
3354fdb
Merge commit 'b2fac92b27dea18cf1ab8438c981b3537fa9ad06' into forward
alexjc May 13, 2016
7c5374f
Message to download the new model file as uncompressed.
alexjc May 13, 2016
35daa9a
Re-implementing visualization of intermediate frames, 50% slower.
alexjc May 13, 2016
da1c3d6
The blending occurs every frame rather than between phases, fixing th…
alexjc May 14, 2016
fef8036
Tidying up the code, comments, docker build.
alexjc May 14, 2016
57a5e24
Re-ordered code, removed loss calculation, improved logging.
alexjc May 14, 2016
a9377f9
Updating Lasagne to it includes Deconv2D, adding sklearn to requireme…
alexjc May 15, 2016
c0f9c69
Fix for image resize by using PIL fit() rather than thumbnail.
alexjc May 15, 2016
5b7c28f
Updating examples in the README after testing.
alexjc May 16, 2016
e07a699
New photo example.
alexjc May 16, 2016
c784dd9
Replacing the balance parameter with `content-weight` and new paramet…
alexjc May 16, 2016
6358992
Minor tweaks to filenames, code layout, adding prototype content reno…
alexjc May 19, 2016
0d8cf11
Reworking layerwise code to allow for inter-layer iterations.
alexjc May 22, 2016
833889f
Separate processing of layers.
alexjc May 22, 2016
feebb85
Averaging the features from other layers before doing the next iterat…
alexjc May 22, 2016
8eddd55
Re-introducing logging and command-line arguments.
alexjc May 22, 2016
bc10b01
Support for semantic maps again.
alexjc May 22, 2016
c1ddad5
Support for arbitrary layer numbers in feature exchange code.
alexjc May 23, 2016
ac03bad
Re-ordering of the exchange/merge operations.
alexjc May 23, 2016
1f969b7
Switch to new network architecture, simplified layer handling with in…
alexjc May 29, 2016
f81446a
Support for micro-iterations within the macro-passes.
alexjc May 29, 2016
90f7db3
Integrated the feature merging code with the layer evaluation, can us…
alexjc May 29, 2016
288f3ac
Cleaned up default parameters for testing.
alexjc May 31, 2016
ec1d401
First prototype of patch-matching.
alexjc May 31, 2016
1c08f62
Improving patch-matching performance.
alexjc Jun 1, 2016
c4e6ceb
Slow version of 3x3 patch-matching.
alexjc Jun 3, 2016
99ec0a7
Minor performance improvements, major memory improvements. Sub-tensor…
alexjc Jun 3, 2016
f07a188
Increased iteration count for PatchMatch, new network size with poten…
alexjc Jun 9, 2016
3a0418e
Cleaning up patch-matching, stops at 10% improve threshold.
alexjc Jun 9, 2016
21da45f
Working layerwise generation with exchanges between layers as soon as…
alexjc Jun 12, 2016
8ca4787
Switch to JIT-compiled patch-matching.
alexjc Jun 13, 2016
3f81d50
Improved patch-matching using numba and JIT-compiled gu-functions.
alexjc Jun 14, 2016
ebe133e
Using previous layer's matched patches, restored display statistics.
alexjc Jun 14, 2016
4015ade
Normalization of content features and using previous pass for patch m…
alexjc Jun 14, 2016
b413689
Extracted patch score calculation, removed unused code.
alexjc Jun 14, 2016
3eb2fe5
Simplified model code, preparing for custom patch biases.
alexjc Jun 15, 2016
ac57d04
Experimental visualizations of the network.
alexjc Jun 16, 2016
6988155
Patch-variety experimental code using statistics, works in a single p…
alexjc Jun 18, 2016
f08c4d2
Removed content feature normalisation, prototype for matching style g…
alexjc Jun 19, 2016
fae15a7
Experiment with residual network. Features are not as well suited to …
alexjc Jul 3, 2016
Extracted patch score calculation, removed unused code.
alexjc committed Jun 14, 2016
commit b413689a561194c4c767b5129c6bd929d651aeb5
34 changes: 15 additions & 19 deletions doodle.py
@@ -187,13 +187,9 @@ def ConcatenateLayer(incoming, layer):
             return ConcatLayer([incoming, net['map%i'%layer]]) if args.semantic_weight > 0.0 else incoming
 
         # Auxiliary network for the semantic layers, and the nearest neighbors calculations.
-        self.pm_inputs, self.pm_buffers, self.pm_candidates = {}, {}, {}
         for layer, upper, lower in zip(args.layers, [None] + args.layers[:-1], args.layers[1:] + [None]):
             self.channels[layer] = net['enc%i_1'%layer].num_filters
             net['sem%i'%layer] = ConcatenateLayer(net['enc%i_1'%layer], layer)
-            self.pm_inputs[layer] = T.ftensor4()
-            self.pm_buffers[layer] = T.ftensor4()
-            self.pm_candidates[layer] = T.itensor4()
         self.network = net
 
     def load_data(self):
@@ -246,16 +242,20 @@ def finalize_image(self, image, resolution):
 # Fast Patch Matching
 #----------------------------------------------------------------------------------------------------------------------
 
+@numba.jit()
+def patches_score(current, buffers, i0, i1, i2, b, a):
+    score = 0.0
+    for y, x in [(-1,-1),(-1,0),(-1,+1),(0,-1),(0,0),(0,+1),(+1,-1),(+1,0),(+1,+1)]:
+        score += np.sum(buffers[i0,:,i1+y,i2+x] * current[0,:,1+b+y,1+a+x])
+    return score
+
 @numba.guvectorize([(numba.float32[:,:,:,:], numba.float32[:,:,:,:], numba.int32[:,:,:], numba.float32[:,:])],
                    '(n,c,x,y),(n,c,z,w),(a,b,i),(a,b)', nopython=True, target='parallel')
 def patches_initialize(current, buffers, indices, scores):
     for b in range(indices.shape[0]):
         for a in range(indices.shape[1]):
             i0, i1, i2 = indices[b,a]
-            score = 0.0
-            for y, x in [(-1,-1),(-1,0),(-1,+1),(0,-1),(0,0),(0,+1),(+1,-1),(+1,0),(+1,+1)]:
-                score += np.sum(buffers[i0,:,i1+y,i2+x] * current[0,:,1+b+y,1+a+x])
-            scores[b,a] = score
+            scores[b,a] = patches_score(current, buffers, i0, i1, i2, b, a)
 
 @numba.guvectorize([(numba.float32[:,:,:,:], numba.float32[:,:,:,:], numba.int32[:,:,:], numba.float32[:,:], numba.float32[:])],
                    '(n,c,x,y),(n,c,z,w),(a,b,i),(a,b),()', nopython=True)
@@ -268,9 +268,7 @@ def patches_propagate(current, buffers, indices, scores, i):
                                               - np.array(offset, dtype=np.int32)
             i1 = min(buffers.shape[2]-2, max(i1, 1))
             i2 = min(buffers.shape[3]-2, max(i2, 1))
-            score = 0.0
-            for y, x in [(-1,-1),(-1,0),(-1,+1),(0,-1),(0,0),(0,+1),(+1,-1),(+1,0),(+1,+1)]:
-                score += np.sum(buffers[i0,:,i1+y,i2+x] * current[0,:,1+b+y,1+a+x])
+            score = patches_score(current, buffers, i0, i1, i2, b, a)
             if score > scores[b,a]:
                 scores[b,a] = score
                 indices[b,a] = np.array((i0, i1, i2), dtype=np.int32)
Expand All @@ -286,9 +284,7 @@ def patches_search(current, buffers, indices, scores, k):
# i2 = min(buffers.shape[3]-2, max(i2 + random.randint(-w, +w), 1))
i1 = np.random.randint(1, buffers.shape[2]-1)
i2 = np.random.randint(1, buffers.shape[3]-1)
score = 0.0
for y, x in [(-1,-1),(-1,0),(-1,+1),(0,-1),(0,0),(0,+1),(+1,-1),(+1,0),(+1,+1)]:
score += np.sum(buffers[i0,:,i1+y,i2+x] * current[0,:,1+b+y,1+a+x])
score = patches_score(current, buffers, i0, i1, i2, b, a)
if score > scores[b,a]:
scores[b,a] = score
indices[b,a] = np.array((i0, i1, i2), dtype=np.int32)
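For reference outside the diff, the extracted helper and the random-search step that calls it can be sketched in plain NumPy. This is a sketch under assumptions, not the PR's exact code: the `numba` decorators are dropped (they only affect compilation, not results), and an explicit `RandomState` replaces the global `np.random` so the example is reproducible.

```python
import numpy as np

def patches_score(current, buffers, i0, i1, i2, b, a):
    # Correlation between the 3x3 neighbourhood of candidate patch (i0, i1, i2)
    # in `buffers` and the 3x3 neighbourhood of output position (b, a) in `current`.
    score = 0.0
    for y, x in [(-1,-1),(-1,0),(-1,+1),(0,-1),(0,0),(0,+1),(+1,-1),(+1,0),(+1,+1)]:
        score += np.sum(buffers[i0,:,i1+y,i2+x] * current[0,:,1+b+y,1+a+x])
    return score

def patches_search(current, buffers, indices, scores, rng):
    # One PatchMatch-style random-search sweep: try a uniformly random candidate
    # location for every output patch, keep it only if it beats the current score.
    for b in range(indices.shape[0]):
        for a in range(indices.shape[1]):
            i0 = indices[b,a,0]
            i1 = rng.randint(1, buffers.shape[2]-1)
            i2 = rng.randint(1, buffers.shape[3]-1)
            score = patches_score(current, buffers, i0, i1, i2, b, a)
            if score > scores[b,a]:
                scores[b,a] = score
                indices[b,a] = (i0, i1, i2)
```

Because candidates are only ever accepted on improvement, a sweep can never make any score worse, which is what lets the diff's 10% improvement threshold terminate the matching.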
@@ -326,7 +322,7 @@ def __init__(self):
     def rescale_image(self, img, scale):
         """Re-implementing skimage.transform.scale without the extra dependency. Saves a lot of space and hassle!
         """
-        output = scipy.misc.toimage(img, cmin=0.0, cmax=255)
+        output = scipy.misc.toimage(img, cmin=0.0, cmax=255.0)
         return np.asarray(PIL.ImageOps.fit(output, [snap(dim*scale) for dim in output.size], PIL.Image.ANTIALIAS))
 
     def load_images(self, name, filename, scale=1.0):
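The `rescale_image` helper predates current library versions: `scipy.misc.toimage` was removed from SciPy and `PIL.Image.ANTIALIAS` from Pillow 10. A hedged sketch of the same idea with present-day APIs — `PIL.Image.fromarray` in place of `toimage`, the equivalent `LANCZOS` filter in place of `ANTIALIAS`, and the repo's grid-snapping `snap()` simplified here to plain rounding:

```python
import numpy as np
import PIL.Image
import PIL.ImageOps

def rescale_image(img, scale):
    # PIL.ImageOps.fit resizes *and* crops to the exact requested size, unlike
    # Image.thumbnail, which only shrinks in place and preserves aspect ratio.
    output = PIL.Image.fromarray(img.astype(np.uint8))
    size = tuple(int(round(dim * scale)) for dim in output.size)
    return np.asarray(PIL.ImageOps.fit(output, size, PIL.Image.LANCZOS))
```

The `fit()` call is exactly why this replaced `thumbnail()` in the earlier commit "Fix for image resize by using PIL fit() rather than thumbnail": it guarantees the output dimensions the caller asked for.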
@@ -531,12 +527,12 @@ def evaluate_feature(self, layer, feature, variety=0.0):
         better_feature = reconstruct_from_patches_2d(better_patches, better_shape)
 
         flat_idx = np.sum(best_idx.reshape((-1,3)) * np.array([B.shape[1]*B.shape[2], B.shape[2], 1]), axis=(1))
-        used = 99. * len(set(flat_idx)) / flat_idx.shape[0]
-        duplicates = 99. * len([v for v in np.bincount(flat_idx) if v>1]) / len(set(flat_idx))
-        changed = 99. * (1.0 - np.where(indices == flat_idx)[0].shape[0] / flat_idx.shape[0])
+        used = 100.0 * len(set(flat_idx)) / flat_idx.shape[0]
+        duplicates = 100.0 * len([v for v in np.bincount(flat_idx) if v>1]) / len(set(flat_idx))
+        changed = 100.0 * (1.0 - np.where(indices == flat_idx)[0].shape[0] / flat_idx.shape[0])
 
         err = best_val.mean()
-        print(' {}layer{} {:>1} {}patches{} used {:2.0f}% dups {:2.0f}% chgd {:2.0f}% {}error{} {:3.2e} {}time{} {:3.1f}s'\
+        print(' {}layer{} {:>1} {}patches{} used {:<3.0f}% dups {:<3.0f}% chgd {:<3.0f}% {}error{} {:3.2e} {}time{} {:3.1f}s'\
               .format(ansi.BOLD, ansi.ENDC, layer, ansi.BOLD, ansi.ENDC, used, duplicates, changed,
                       ansi.BOLD, ansi.ENDC, err, ansi.BOLD, ansi.ENDC, time.time() - iter_time))
 