diff --git a/README.md b/README.md index 47bbc2eb..e7b783fe 100755 --- a/README.md +++ b/README.md @@ -2,11 +2,12 @@
-Janie Clayton +Janie Larson -http://redqueengraphics.com +http://redqueencoder.com [@RedQueenCoder](https://twitter.com/RedQueenCoder) +[@RedQueenCoder@appdot.net](https://appdot.net/@RedQueenCoder) Brad Larson @@ -233,59 +234,243 @@ Operations are currently being ported over from GPUImage 2. Here are the ones th - *highlightTintColor*: Highlight tint RGB color (GPUVector4). Default: `{0.0f, 0.0f, 1.0f, 1.0f}` (blue). - *shadowTintIntensity*: Shadow tint intensity, from 0.0 to 1.0. Default: 0.0 - *highlightTintIntensity*: Highlight tint intensity, from 0.0 to 1.0, with 0.0 as the default. - - ### Blending modes ### - - **DissolveBlend**: Applies a dissolve blend of two images - - *mix*: The degree with which the second image overrides the first (0.0 - 1.0, with 0.5 as the default) +- **LookupFilter**: Uses an RGB color lookup image to remap the colors in an image. First, use your favourite photo editing application to apply a filter to lookup.png from framework/Operations/LookupImages. For this to work properly each pixel color must not depend on other pixels (e.g. blur will not work). If you need a more complex filter you can create as many lookup tables as required. Once ready, use your new lookup.png file as the basis of a PictureInput that you provide for the lookupImage property. + - *intensity*: The intensity of the applied effect, from 0.0 (stock image) to 1.0 (fully applied effect). + - *lookupImage*: The image to use as the lookup reference, in the form of a PictureInput. - - **MultiplyBlend**: Applies a multiply blend of two images +- **AmatorkaFilter**: A photo filter based on a Photoshop action by Amatorka: http://amatorka.deviantart.com/art/Amatorka-Action-2-121069631 . If you want to use this effect you have to add lookup_amatorka.png from the GPUImage framework/Operations/LookupImages folder to your application bundle. - - **AddBlend**: Applies an additive blend of two images +- **MissEtikateFilter**: A photo filter based on a Photoshop action by Miss Etikate: http://miss-etikate.deviantart.com/art/Photoshop-Action-15-120151961 . If you want to use this effect you have to add lookup_miss_etikate.png from the GPUImage framework/Operations/LookupImages folder to your application bundle. - - **SubtractBlend**: Applies a subtractive blend of two images +- **SoftElegance**: Another lookup-based color remapping filter. If you want to use this effect you have to add lookup_soft_elegance_1.png and lookup_soft_elegance_2.png from the GPUImage framework/Operations/LookupImages folder to your application bundle. - - **DivideBlend**: Applies a division blend of two images +- **ColorInversion**: Inverts the colors of an image - - **OverlayBlend**: Applies an overlay blend of two images +- **Luminance**: Reduces an image to just its luminance (greyscale). - - **DarkenBlend**: Blends two images by taking the minimum value of each color component between the images +- **MonochromeFilter**: Converts the image to a single-color version, based on the luminance of each pixel + - *intensity*: The degree to which the specific color replaces the normal image color (0.0 - 1.0, with 1.0 as the default) + - *color*: The color to use as the basis for the effect, with (0.6, 0.45, 0.3, 1.0) as the default. + +- **FalseColor**: Uses the luminance of the image to mix between two user-specified colors + - *firstColor*: The first and second colors specify what colors replace the dark and light areas of the image, respectively. The defaults are (0.0, 0.0, 0.5) and (1.0, 0.0, 0.0). 
+ - *secondColor*: + +- **Haze**: Used to add or remove haze (similar to a UV filter) + - *distance*: Strength of the color applied. Default 0. Values between -.3 and .3 are best. + - *slope*: Amount of color change. Default 0. Values between -.3 and .3 are best. + +- **SepiaToneFilter**: Simple sepia tone filter + - *intensity*: The degree to which the sepia tone replaces the normal image color (0.0 - 1.0, with 1.0 as the default) + +- **LuminanceThreshold**: Pixels with a luminance above the threshold will appear white, and those below will be black + - *threshold*: The luminance threshold, from 0.0 to 1.0, with a default of 0.5 + +- **AdaptiveThreshold**: Determines the local luminance around a pixel, then turns the pixel black if it is below that local luminance and white if above. This can be useful for picking out text under varying lighting conditions. + - *blurRadiusInPixels*: A multiplier for the background averaging blur radius in pixels, with a default of 4. + +- **ChromaKeying**: For a given color in the image, sets the alpha channel to 0. This is similar to the ChromaKeyBlend, only instead of blending in a second image for a matching color this doesn't take in a second image and just turns a given color transparent. + - *thresholdSensitivity*: How close a color match needs to exist to the target color to be replaced (default of 0.4) + - *smoothing*: How smoothly to blend for the color match (default of 0.1) + +- **Vibrance**: Adjusts the vibrance of an image + - *vibrance*: The vibrance adjustment to apply, using 0.0 as the default, and a suggested min/max of around -1.2 and 1.2, respectively. + +- **HighlightShadowTint**: Allows you to tint the shadows and highlights of an image independently using a color and intensity + - *shadowTintColor*: Shadow tint RGB color (GPUVector4). Default: `{1.0f, 0.0f, 0.0f, 1.0f}` (red). + - *highlightTintColor*: Highlight tint RGB color (GPUVector4). Default: `{0.0f, 0.0f, 1.0f, 1.0f}` (blue). + - *shadowTintIntensity*: Shadow tint intensity, from 0.0 to 1.0. Default: 0.0 + - *highlightTintIntensity*: Highlight tint intensity, from 0.0 to 1.0, with 0.0 as the default. + +### Image processing ### + +- **Sharpen**: Sharpens the image + - *sharpness*: The sharpness adjustment to apply (-4.0 - 4.0, with 0.0 as the default) + +- **GaussianBlur**: A hardware-optimized, variable-radius Gaussian blur + - *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 2.0. This adjusts the sigma variable in the Gaussian distribution function. + +- **BoxBlur**: A hardware-optimized, variable-radius box blur + - *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 2.0. This adjusts the box radius for the blur function. + +- **iOSBlur**: An attempt to replicate the background blur used on iOS 7 in places like the control center. + - *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 48.0. This adjusts the sigma variable in the Gaussian distribution function. + - *saturation*: Saturation ranges from 0.0 (fully desaturated) to 2.0 (max saturation), with 0.8 as the normal level + - *rangeReductionFactor*: The range to reduce the luminance of the image, defaulting to 0.6. + +- **MedianFilter**: Takes the median value of the three color components, over a 3x3 area + +- **TiltShift**: A simulated tilt shift lens effect + - *blurRadiusInPixels*: The radius of the underlying blur, in pixels. This is 7.0 by default. 
+ - *topFocusLevel*: The normalized location of the top of the in-focus area in the image, this value should be lower than bottomFocusLevel, default 0.4 + - *bottomFocusLevel*: The normalized location of the bottom of the in-focus area in the image, this value should be higher than topFocusLevel, default 0.6 + - *focusFallOffRate*: The rate at which the image gets blurry away from the in-focus region, default 0.2 + +- **Convolution3x3**: Runs a 3x3 convolution kernel against the image + - *convolutionKernel*: The convolution kernel is a 3x3 matrix of values to apply to the pixel and its 8 surrounding pixels. The matrix is specified in row-major order, with the top left pixel being m11 and the bottom right m33. If the values in the matrix don't add up to 1.0, the image could be brightened or darkened. + +- **SobelEdgeDetection**: Sobel edge detection, with edges highlighted in white + - *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0. + +- **PrewittEdgeDetection**: Prewitt edge detection, with edges highlighted in white + - *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0. + +- **ThresholdSobelEdgeDetection**: Performs Sobel edge detection, but applies a threshold instead of giving gradual strength values + - *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0. + - *threshold*: Any edge above this threshold will be black, and anything below white. Ranges from 0.0 to 1.0, with 0.8 as the default + +- **LocalBinaryPattern**: This performs a comparison of intensity of the red channel of the 8 surrounding pixels and that of the central one, encoding the comparison results in a bit string that becomes this pixel intensity. The least-significant bit is the top-right comparison, going counterclockwise to end at the right comparison as the most significant bit. + +- **ColorLocalBinaryPattern**: This performs a comparison of intensity of all color channels of the 8 surrounding pixels and that of the central one, encoding the comparison results in a bit string that becomes each color channel's intensity. The least-significant bit is the top-right comparison, going counterclockwise to end at the right comparison as the most significant bit. + +- **LowPassFilter**: This applies a low pass filter to incoming video frames. This basically accumulates a weighted rolling average of previous frames with the current ones as they come in. This can be used to denoise video, add motion blur, or be used to create a high pass filter. + - *strength*: This controls the degree by which the previous accumulated frames are blended with the current one. This ranges from 0.0 to 1.0, with a default of 0.5. + +- **HighPassFilter**: This applies a high pass filter to incoming video frames. This is the inverse of the low pass filter, showing the difference between the current frame and the weighted rolling average of previous ones. This is most useful for motion detection. + - *strength*: This controls the degree by which the previous accumulated frames are blended and then subtracted from the current one. This ranges from 0.0 to 1.0, with a default of 0.5. 
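The convolution and edge-detection entries above are easiest to follow with a concrete kernel, so a minimal usage sketch follows. It assumes the GPUImage 2-style Swift API (PictureInput, PictureOutput, Matrix3x3, and the `-->` chaining operator) carries over unchanged to this Metal port; the image name is a placeholder, not part of this change.

```swift
import GPUImage
import UIKit

// Sketch only: apply a 3x3 kernel whose entries sum to 1.0,
// so overall brightness is preserved (see the Convolution3x3 notes above).
let input = PictureInput(image: UIImage(named: "sample.jpg")!) // placeholder asset
let convolution = Convolution3x3()
convolution.convolutionKernel = Matrix3x3(rowMajorValues: [
     0.0, -1.0,  0.0,
    -1.0,  5.0, -1.0,
     0.0, -1.0,  0.0
])

let output = PictureOutput()
output.imageAvailableCallback = { filteredImage in
    // Hand the filtered UIImage back to the application here.
    print("Filtered image size: \(filteredImage.size)")
}

input --> convolution --> output
input.processImage(synchronously: true)
```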
+ +- **ZoomBlur**: Applies a directional motion blur to an image + - *blurSize*: A multiplier for the blur size, ranging from 0.0 on up, with a default of 1.0 + - *blurCenter*: The normalized center of the blur. (0.5, 0.5) by default + +- **ColourFASTFeatureDetection**: Brings out the ColourFAST feature descriptors for an image + - *blurRadiusInPixels*: The underlying blur radius for the box blur. Default is 3.0. + +### Blending modes ### + +- **ChromaKeyBlend**: Selectively replaces a color in the first image with the second image + - *thresholdSensitivity*: How close a color match needs to exist to the target color to be replaced (default of 0.4) + - *smoothing*: How smoothly to blend for the color match (default of 0.1) + +- **DissolveBlend**: Applies a dissolve blend of two images + - *mix*: The degree with which the second image overrides the first (0.0 - 1.0, with 0.5 as the default) + +- **MultiplyBlend**: Applies a multiply blend of two images + +- **AddBlend**: Applies an additive blend of two images + +- **SubtractBlend**: Applies a subtractive blend of two images + +- **DivideBlend**: Applies a division blend of two images + +- **OverlayBlend**: Applies an overlay blend of two images + +- **DarkenBlend**: Blends two images by taking the minimum value of each color component between the images + +- **LightenBlend**: Blends two images by taking the maximum value of each color component between the images + +- **ColorBurnBlend**: Applies a color burn blend of two images + +- **ColorDodgeBlend**: Applies a color dodge blend of two images + +- **ScreenBlend**: Applies a screen blend of two images + +- **ExclusionBlend**: Applies an exclusion blend of two images + +- **DifferenceBlend**: Applies a difference blend of two images + +- **HardLightBlend**: Applies a hard light blend of two images + +- **SoftLightBlend**: Applies a soft light blend of two images + +- **AlphaBlend**: Blends the second image over the first, based on the second's alpha channel + - *mix*: The degree with which the second image overrides the first (0.0 - 1.0, with 1.0 as the default) + +- **SourceOverBlend**: Applies a source over blend of two images + +- **NormalBlend**: Applies a normal blend of two images + +- **ColorBlend**: Applies a color blend of two images + +- **HueBlend**: Applies a hue blend of two images + +- **SaturationBlend**: Applies a saturation blend of two images + +- **LuminosityBlend**: Applies a luminosity blend of two images + +- **LinearBurnBlend**: Applies a linear burn blend of two images + +### Visual effects ### + +- **Pixellate**: Applies a pixellation effect on an image or video + - *fractionalWidthOfAPixel*: How large the pixels are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05) - - **LightenBlend**: Blends two images by taking the maximum value of each color component between the images +- **PolarPixellate**: Applies a pixellation effect on an image or video, based on polar coordinates instead of Cartesian ones + - *center*: The center about which to apply the pixellation, defaulting to (0.5, 0.5) + - *pixelSize*: The fractional pixel size, split into width and height components. 
The default is (0.05, 0.05) - - **ColorBurnBlend**: Applies a color burn blend of two images +- **PolkaDot**: Breaks an image up into colored dots within a regular grid + - *fractionalWidthOfAPixel*: How large the dots are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05) + - *dotScaling*: What fraction of each grid space is taken up by a dot, from 0.0 to 1.0 with a default of 0.9. - - **ColorDodgeBlend**: Applies a color dodge blend of two images +- **Halftone**: Applies a halftone effect to an image, like news print + - *fractionalWidthOfAPixel*: How large the halftone dots are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05) - - **ScreenBlend**: Applies a screen blend of two images +- **Crosshatch**: This converts an image into a black-and-white crosshatch pattern + - *crossHatchSpacing*: The fractional width of the image to use as the spacing for the crosshatch. The default is 0.03. + - *lineWidth*: A relative width for the crosshatch lines. The default is 0.003. - - **ExclusionBlend**: Applies an exclusion blend of two images +- **SketchFilter**: Converts video to look like a sketch. This is just the Sobel edge detection filter with the colors inverted + - *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0. - - **DifferenceBlend**: Applies a difference blend of two images +- **ThresholdSketchFilter**: Same as the sketch filter, only the edges are thresholded instead of being grayscale + - *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0. + - *threshold*: Any edge above this threshold will be black, and anything below white. Ranges from 0.0 to 1.0, with 0.8 as the default - - **HardLightBlend**: Applies a hard light blend of two images +- **ToonFilter**: This uses Sobel edge detection to place a black border around objects, and then it quantizes the colors present in the image to give a cartoon-like quality to the image. + - *threshold*: The sensitivity of the edge detection, with lower values being more sensitive. Ranges from 0.0 to 1.0, with 0.2 as the default + - *quantizationLevels*: The number of color levels to represent in the final image. Default is 10.0 - - **SoftLightBlend**: Applies a soft light blend of two images +- **SmoothToonFilter**: This uses a similar process as the ToonFilter, only it precedes the toon effect with a Gaussian blur to smooth out noise. + - *blurRadiusInPixels*: The radius of the underlying Gaussian blur. The default is 2.0. + - *threshold*: The sensitivity of the edge detection, with lower values being more sensitive. Ranges from 0.0 to 1.0, with 0.2 as the default + - *quantizationLevels*: The number of color levels to represent in the final image. 
Default is 10.0 - - **AlphaBlend**: Blends the second image over the first, based on the second's alpha channel - - *mix*: The degree with which the second image overrides the first (0.0 - 1.0, with 1.0 as the default) +- **EmbossFilter**: Applies an embossing effect on the image + - *intensity*: The strength of the embossing, from 0.0 to 4.0, with 1.0 as the normal level - - **SourceOverBlend**: Applies a source over blend of two images +- **SwirlDistortion**: Creates a swirl distortion on the image + - *radius*: The radius from the center to apply the distortion, with a default of 0.5 + - *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to twist, with a default of (0.5, 0.5) + - *angle*: The amount of twist to apply to the image, with a default of 1.0 - - **ColorBurnBlend**: Applies a color burn blend of two images +- **BulgeDistortion**: Creates a bulge distortion on the image + - *radius*: The radius from the center to apply the distortion, with a default of 0.25 + - *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5) + - *scale*: The amount of distortion to apply, from -1.0 to 1.0, with a default of 0.5 - - **ColorDodgeBlend**: Applies a color dodge blend of two images +- **PinchDistortion**: Creates a pinch distortion of the image + - *radius*: The radius from the center to apply the distortion, with a default of 1.0 + - *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5) + - *scale*: The amount of distortion to apply, from -2.0 to 2.0, with a default of 1.0 - - **NormalBlend**: Applies a normal blend of two images +- **StretchDistortion**: Creates a stretch distortion of the image + - *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5) - - **ColorBlend**: Applies a color blend of two images +- **SphereRefraction**: Simulates the refraction through a glass sphere + - *center*: The center about which to apply the distortion, with a default of (0.5, 0.5) + - *radius*: The radius of the distortion, ranging from 0.0 to 1.0, with a default of 0.25 + - *refractiveIndex*: The index of refraction for the sphere, with a default of 0.71 - - **HueBlend**: Applies a hue blend of two images +- **GlassSphereRefraction**: Same as SphereRefraction, only the image is not inverted and there's a little bit of frosting at the edges of the glass + - *center*: The center about which to apply the distortion, with a default of (0.5, 0.5) + - *radius*: The radius of the distortion, ranging from 0.0 to 1.0, with a default of 0.25 + - *refractiveIndex*: The index of refraction for the sphere, with a default of 0.71 - - **SaturationBlend**: Applies a saturation blend of two images +- **Vignette**: Performs a vignetting effect, fading out the image at the edges + - *center*: The center for the vignette in tex coords (CGPoint), with a default of 0.5, 0.5 + - *color*: The color to use for the vignette (GPUVector3), with a default of black + - *start*: The normalized distance from the center where the vignette effect starts, with a default of 0.5 + - *end*: The normalized distance from the center where the vignette effect ends, with a default of 0.75 - - **LuminosityBlend**: Applies a luminosity blend of two images +- **KuwaharaRadius3Filter**: A modified version of the Kuwahara filter, optimized to work over just a radius of three pixels - - 
**LinearBurnBlend**: Applies a linear burn blend of two images +- **CGAColorspace**: Simulates the colorspace of a CGA monitor - +- **Solarize**: Applies a solarization effect + - *threshold*: Pixels with a luminance above the threshold will invert their color. Ranges from 0.0 to 1.0, with 0.5 as the default. diff --git a/Sources/GPUImage/Operations/AddBlend.metal b/Sources/GPUImage/Operations/AddBlend.metal index 1b8462b8..f8ade7c3 100644 --- a/Sources/GPUImage/Operations/AddBlend.metal +++ b/Sources/GPUImage/Operations/AddBlend.metal @@ -9,7 +9,7 @@ fragment half4 addBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half r; if (overlay.r * base.a + base.r * overlay.a >= overlay.a * base.a) { diff --git a/Sources/GPUImage/Operations/AlphaBlend.metal b/Sources/GPUImage/Operations/AlphaBlend.metal index 50461044..66ebc30a 100644 --- a/Sources/GPUImage/Operations/AlphaBlend.metal +++ b/Sources/GPUImage/Operations/AlphaBlend.metal @@ -15,7 +15,7 @@ fragment half4 alphaBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(mix(textureColor.rgb, textureColor2.rgb, textureColor2.a * half(uniform.mixturePercent)), textureColor.a); } diff --git a/Sources/GPUImage/Operations/ChromaKeyBlend.metal b/Sources/GPUImage/Operations/ChromaKeyBlend.metal index 1c501f5c..ffff9a6a 100644 --- a/Sources/GPUImage/Operations/ChromaKeyBlend.metal +++ b/Sources/GPUImage/Operations/ChromaKeyBlend.metal @@ -18,7 +18,7 @@ fragment half4 chromaKeyBlendFragment(TwoInputVertexIO fragmentInput [[stage_in] constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half maskY = 0.2989h * uniform.colorToReplace.r + 0.5866h * uniform.colorToReplace.g + 0.1145h * uniform.colorToReplace.b; half maskCr = 0.7132h * (uniform.colorToReplace.r - maskY); diff --git a/Sources/GPUImage/Operations/ColorBlend.metal b/Sources/GPUImage/Operations/ColorBlend.metal index 5031cb8c..16716514 100644 --- a/Sources/GPUImage/Operations/ColorBlend.metal +++ b/Sources/GPUImage/Operations/ColorBlend.metal @@ -10,7 +10,7 @@ fragment half4 colorBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(base.rgb * (1.0h - overlay.a) + setlum(overlay.rgb, lum(base.rgb)) * overlay.a, base.a); } diff --git a/Sources/GPUImage/Operations/ColorBurnBlend.metal 
b/Sources/GPUImage/Operations/ColorBurnBlend.metal index b4f0fba3..6c214e9d 100644 --- a/Sources/GPUImage/Operations/ColorBurnBlend.metal +++ b/Sources/GPUImage/Operations/ColorBurnBlend.metal @@ -9,7 +9,7 @@ fragment half4 colorBurnBlendFragment(TwoInputVertexIO fragmentInput [[stage_in] constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half4 whiteColor = half4(1.0); return whiteColor - (whiteColor - textureColor) / textureColor2; diff --git a/Sources/GPUImage/Operations/ColorDodgeBlend.metal b/Sources/GPUImage/Operations/ColorDodgeBlend.metal index 61aab301..08eaf091 100644 --- a/Sources/GPUImage/Operations/ColorDodgeBlend.metal +++ b/Sources/GPUImage/Operations/ColorDodgeBlend.metal @@ -9,7 +9,7 @@ fragment half4 colorDodgeBlendFragment(TwoInputVertexIO fragmentInput [[stage_in constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half3 baseOverlayAlphaProduct = half3(overlay.a * base.a); half3 rightHandProduct = overlay.rgb * (1.0h - base.a) + base.rgb * (1.0h - overlay.a); diff --git a/Sources/GPUImage/Operations/DarkenBlend.metal b/Sources/GPUImage/Operations/DarkenBlend.metal index 1a282efd..ce7337bc 100644 --- a/Sources/GPUImage/Operations/DarkenBlend.metal +++ b/Sources/GPUImage/Operations/DarkenBlend.metal @@ -9,7 +9,7 @@ fragment half4 darkenBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(min(overlay.rgb * base.a, base.rgb * overlay.a) + overlay.rgb * (1.0h - base.a) + base.rgb * (1.0h - overlay.a), 1.0h); } diff --git a/Sources/GPUImage/Operations/DifferenceBlend.metal b/Sources/GPUImage/Operations/DifferenceBlend.metal index 26e2687a..8c1d575d 100644 --- a/Sources/GPUImage/Operations/DifferenceBlend.metal +++ b/Sources/GPUImage/Operations/DifferenceBlend.metal @@ -9,7 +9,7 @@ fragment half4 differenceBlendFragment(TwoInputVertexIO fragmentInput [[stage_in constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(abs(textureColor2.rgb - textureColor.rgb), textureColor.a); } diff --git a/Sources/GPUImage/Operations/DissolveBlend.metal b/Sources/GPUImage/Operations/DissolveBlend.metal index 29f9c318..9cf75c37 100644 --- a/Sources/GPUImage/Operations/DissolveBlend.metal +++ b/Sources/GPUImage/Operations/DissolveBlend.metal @@ -15,7 +15,7 @@ fragment half4 dissolveBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]] constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, 
fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return mix(textureColor, textureColor2, half(uniform.mixturePercent)); } diff --git a/Sources/GPUImage/Operations/DivideBlend.metal b/Sources/GPUImage/Operations/DivideBlend.metal index bce92664..b88533bf 100644 --- a/Sources/GPUImage/Operations/DivideBlend.metal +++ b/Sources/GPUImage/Operations/DivideBlend.metal @@ -9,7 +9,7 @@ fragment half4 divideBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half ra; if (overlay.a == 0.0h || ((base.r / overlay.r) > (base.a / overlay.a))) diff --git a/Sources/GPUImage/Operations/ExclusionBlend.metal b/Sources/GPUImage/Operations/ExclusionBlend.metal index dfed8793..3e627764 100644 --- a/Sources/GPUImage/Operations/ExclusionBlend.metal +++ b/Sources/GPUImage/Operations/ExclusionBlend.metal @@ -9,7 +9,7 @@ fragment half4 exclusionBlendFragment(TwoInputVertexIO fragmentInput [[stage_in] constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4((overlay.rgb * base.a + base.rgb * overlay.a - 2.0h * overlay.rgb * base.rgb) + overlay.rgb * (1.0h - base.a) + base.rgb * (1.0h - overlay.a), base.a); } diff --git a/Sources/GPUImage/Operations/HardLightBlend.metal b/Sources/GPUImage/Operations/HardLightBlend.metal index d57ee39f..03b51356 100644 --- a/Sources/GPUImage/Operations/HardLightBlend.metal +++ b/Sources/GPUImage/Operations/HardLightBlend.metal @@ -9,7 +9,7 @@ fragment half4 hardLightBlendFragment(TwoInputVertexIO fragmentInput [[stage_in] constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half ra; if (2.0h * overlay.r < overlay.a) { diff --git a/Sources/GPUImage/Operations/HueBlend.metal b/Sources/GPUImage/Operations/HueBlend.metal index 8478db5f..192e1302 100644 --- a/Sources/GPUImage/Operations/HueBlend.metal +++ b/Sources/GPUImage/Operations/HueBlend.metal @@ -12,7 +12,7 @@ fragment half4 hueBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(base.rgb * (1.0h - overlay.a) + setlum(setsat(overlay.rgb, sat(base.rgb)), lum(base.rgb)) * overlay.a, base.a); } diff --git a/Sources/GPUImage/Operations/LightenBlend.metal b/Sources/GPUImage/Operations/LightenBlend.metal index de2b5dbd..100c0c89 100644 --- 
a/Sources/GPUImage/Operations/LightenBlend.metal +++ b/Sources/GPUImage/Operations/LightenBlend.metal @@ -9,7 +9,7 @@ fragment half4 lightenBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return max(textureColor, textureColor2); } diff --git a/Sources/GPUImage/Operations/LinearBurnBlend.metal b/Sources/GPUImage/Operations/LinearBurnBlend.metal index e6097753..e48f78b4 100644 --- a/Sources/GPUImage/Operations/LinearBurnBlend.metal +++ b/Sources/GPUImage/Operations/LinearBurnBlend.metal @@ -9,7 +9,7 @@ fragment half4 linearBurnBlendFragment(TwoInputVertexIO fragmentInput [[stage_in constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(clamp(textureColor.rgb + textureColor2.rgb - half3(1.0h), half3(0.0h), half3(1.0h)), textureColor.a); } diff --git a/Sources/GPUImage/Operations/LuminosityBlend.metal b/Sources/GPUImage/Operations/LuminosityBlend.metal index bcd55420..05bcc71d 100644 --- a/Sources/GPUImage/Operations/LuminosityBlend.metal +++ b/Sources/GPUImage/Operations/LuminosityBlend.metal @@ -10,7 +10,7 @@ fragment half4 luminosityBlendFragment(TwoInputVertexIO fragmentInput [[stage_in constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(base.rgb * (1.0h - overlay.a) + setlum(base.rgb, lum(overlay.rgb)) * overlay.a, base.a); } diff --git a/Sources/GPUImage/Operations/MultiplyBlend.metal b/Sources/GPUImage/Operations/MultiplyBlend.metal index ac77400d..e854dc8e 100644 --- a/Sources/GPUImage/Operations/MultiplyBlend.metal +++ b/Sources/GPUImage/Operations/MultiplyBlend.metal @@ -9,7 +9,7 @@ fragment half4 multiplyBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]] constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return overlay * base + overlay * (1.0h - base.a) + base * (1.0h - overlay.a); } diff --git a/Sources/GPUImage/Operations/OverlayBlend.metal b/Sources/GPUImage/Operations/OverlayBlend.metal index 85505c46..a40426bf 100644 --- a/Sources/GPUImage/Operations/OverlayBlend.metal +++ b/Sources/GPUImage/Operations/OverlayBlend.metal @@ -9,7 +9,7 @@ fragment half4 overlayBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, 
fragmentInput.textureCoordinate2); half ra; if (2.0h * base.r < base.a) { diff --git a/Sources/GPUImage/Operations/SaturationBlend.metal b/Sources/GPUImage/Operations/SaturationBlend.metal index 577d7296..ef1f73b0 100644 --- a/Sources/GPUImage/Operations/SaturationBlend.metal +++ b/Sources/GPUImage/Operations/SaturationBlend.metal @@ -10,7 +10,7 @@ fragment half4 saturationBlendFragment(TwoInputVertexIO fragmentInput [[stage_in constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(base.rgb * (1.0h - overlay.a) + setlum(setsat(base.rgb, sat(overlay.rgb)), lum(base.rgb)) * overlay.a, base.a); } diff --git a/Sources/GPUImage/Operations/ScreenBlend.metal b/Sources/GPUImage/Operations/ScreenBlend.metal index 689f9bc6..4be309c9 100644 --- a/Sources/GPUImage/Operations/ScreenBlend.metal +++ b/Sources/GPUImage/Operations/ScreenBlend.metal @@ -9,7 +9,7 @@ fragment half4 screenBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]], constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half4 whiteColor = half4(1.0); return whiteColor - ((whiteColor - textureColor2) * (whiteColor - textureColor)); diff --git a/Sources/GPUImage/Operations/Sharpen.metal b/Sources/GPUImage/Operations/Sharpen.metal index 25cb56be..80214e4a 100644 --- a/Sources/GPUImage/Operations/Sharpen.metal +++ b/Sources/GPUImage/Operations/Sharpen.metal @@ -1,87 +1,18 @@ #include +#include "TexelSamplingTypes.h" using namespace metal; -typedef struct -{ - float4 position [[position]]; - - float2 textureCoordinate [[user(textureCoordinate)]]; - float2 leftTextureCoordinate [[user(leftTextureCoordinate)]]; - float2 rightTextureCoordinate [[user(rightTextureCoordinate)]]; - float2 topTextureCoordinate [[user(topTextureCoordinate)]]; - float2 bottomTextureCoordinate [[user(bottomTextureCoordinate)]]; -} SharpenVertexIO; - - - -vertex SharpenVertexIO sharpenVertex(const device packed_float2 *position [[buffer(0)]], - const device packed_float2 *textureCoordinate [[buffer(1)]], - uint vid [[vertex_id]]) -{ - SharpenVertexIO outputVertices; - - outputVertices.position = float4(position[vid], 0, 1.0); - - float2 widthStep = float2(1.0, 0.0); - float2 heightStep = float2(0.0, 1.0); - - outputVertices.textureCoordinate = textureCoordinate[vid]; - outputVertices.leftTextureCoordinate = textureCoordinate[vid] - widthStep; - outputVertices.rightTextureCoordinate = textureCoordinate[vid] + widthStep; - outputVertices.topTextureCoordinate = textureCoordinate[vid] + heightStep; - outputVertices.bottomTextureCoordinate = textureCoordinate[vid] - heightStep; - - return outputVertices; -} - - -// Vertex Shader -/* - attribute vec4 position; - attribute vec4 inputTextureCoordinate; - - uniform float texelWidth; - uniform float texelHeight; - uniform float sharpness; - - varying vec2 textureCoordinate; - varying vec2 leftTextureCoordinate; - varying vec2 rightTextureCoordinate; - varying vec2 topTextureCoordinate; - varying vec2 bottomTextureCoordinate; - - varying float centerMultiplier; - varying 
float edgeMultiplier; - - void main() - { - gl_Position = position; - - vec2 widthStep = vec2(texelWidth, 0.0); - vec2 heightStep = vec2(0.0, texelHeight); - - textureCoordinate = inputTextureCoordinate.xy; - leftTextureCoordinate = inputTextureCoordinate.xy - widthStep; - rightTextureCoordinate = inputTextureCoordinate.xy + widthStep; - topTextureCoordinate = inputTextureCoordinate.xy + heightStep; - bottomTextureCoordinate = inputTextureCoordinate.xy - heightStep; - - centerMultiplier = 1.0 + 4.0 * sharpness; - edgeMultiplier = sharpness; - } - - */ - typedef struct { float sharpness; } SharpenUniform; -fragment half4 sharpenFragment(SharpenVertexIO fragmentInput [[stage_in]], +fragment half4 sharpenFragment(NearbyTexelVertexIO fragmentInput [[stage_in]], texture2d inputTexture [[texture(0)]], constant SharpenUniform& uniform [[buffer(1)]]) { constexpr sampler quadSampler(coord::pixel); - half3 centerColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate).rgb; + half4 centerColorWithAlpha = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); + half3 centerColor = centerColorWithAlpha.rgb; half3 leftColor = inputTexture.sample(quadSampler, fragmentInput.leftTextureCoordinate).rgb; half3 rightColor = inputTexture.sample(quadSampler, fragmentInput.rightTextureCoordinate).rgb; half3 topColor = inputTexture.sample(quadSampler, fragmentInput.topTextureCoordinate).rgb; @@ -91,32 +22,7 @@ fragment half4 sharpenFragment(SharpenVertexIO fragmentInput [[stage_in]], half centerMultiplier = 1.0 + 4.0 * edgeMultiplier; return half4((centerColor * centerMultiplier - - (leftColor * edgeMultiplier + rightColor * edgeMultiplier+ topColor * edgeMultiplier + bottomColor * edgeMultiplier)), - inputTexture.sample(quadSampler, fragmentInput.bottomTextureCoordinate).w); + - (leftColor * edgeMultiplier + rightColor * edgeMultiplier + topColor * edgeMultiplier + bottomColor * edgeMultiplier)), + centerColorWithAlpha.a); } -// Fragment Shader -/* - varying vec2 textureCoordinate; - varying vec2 leftTextureCoordinate; - varying vec2 rightTextureCoordinate; - varying vec2 topTextureCoordinate; - varying vec2 bottomTextureCoordinate; - - varying float centerMultiplier; - varying float edgeMultiplier; - - uniform sampler2D inputImageTexture; - - void main() - { - vec3 textureColor = texture2D(inputImageTexture, textureCoordinate).rgb; - vec3 leftTextureColor = texture2D(inputImageTexture, leftTextureCoordinate).rgb; - vec3 rightTextureColor = texture2D(inputImageTexture, rightTextureCoordinate).rgb; - vec3 topTextureColor = texture2D(inputImageTexture, topTextureCoordinate).rgb; - vec3 bottomTextureColor = texture2D(inputImageTexture, bottomTextureCoordinate).rgb; - - gl_FragColor = vec4((textureColor * centerMultiplier - (leftTextureColor * edgeMultiplier + rightTextureColor * edgeMultiplier + topTextureColor * edgeMultiplier + bottomTextureColor * edgeMultiplier)), texture2D(inputImageTexture, bottomTextureCoordinate).w); - } - - */ diff --git a/Sources/GPUImage/Operations/Sharpen.swift b/Sources/GPUImage/Operations/Sharpen.swift index 05f04c2e..a870247b 100644 --- a/Sources/GPUImage/Operations/Sharpen.swift +++ b/Sources/GPUImage/Operations/Sharpen.swift @@ -1,20 +1,9 @@ -public class Sharpen: BasicOperation { +public class Sharpen: TextureSamplingOperation { public var sharpness: Float = 0.0 { didSet { uniformSettings["sharpness"] = sharpness } } - public var overriddenTexelSize: Size? 
public init() { - super.init( - vertexFunctionName: "sharpenVertex", fragmentFunctionName: "sharpenFragment", - numberOfInputs: 1) + super.init(fragmentFunctionName: "sharpenFragment") ({ sharpness = 0.0 })() } - - // Pretty sure this is OpenGL only - // override func configureFramebufferSpecificUniforms(_ inputFramebuffer:Framebuffer) { - // let outputRotation = overriddenOutputRotation ?? inputFramebuffer.orientation.rotationNeededForOrientation(.portrait) - // let texelSize = overriddenTexelSize ?? inputFramebuffer.texelSize(for:outputRotation) - // uniformSettings["texelWidth"] = texelSize.width - // uniformSettings["texelHeight"] = texelSize.height - // } } diff --git a/Sources/GPUImage/Operations/SoftLightBlend.metal b/Sources/GPUImage/Operations/SoftLightBlend.metal index ca31c72c..ae7845a5 100644 --- a/Sources/GPUImage/Operations/SoftLightBlend.metal +++ b/Sources/GPUImage/Operations/SoftLightBlend.metal @@ -9,7 +9,7 @@ fragment half4 softLightBlendFragment(TwoInputVertexIO fragmentInput [[stage_in] constexpr sampler quadSampler; half4 base = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 overlay = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 overlay = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); half alphaDivisor = base.a + step(base.a, 0.0h); // Protect against a divide-by-zero blacking out things in the output diff --git a/Sources/GPUImage/Operations/SourceOverBlend.metal b/Sources/GPUImage/Operations/SourceOverBlend.metal index f38b38c0..2539f07d 100644 --- a/Sources/GPUImage/Operations/SourceOverBlend.metal +++ b/Sources/GPUImage/Operations/SourceOverBlend.metal @@ -9,7 +9,7 @@ fragment half4 sourceOverBlendFragment(TwoInputVertexIO fragmentInput [[stage_in constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return mix(textureColor, textureColor2, textureColor2.a); } diff --git a/Sources/GPUImage/Operations/SubtractBlend.metal b/Sources/GPUImage/Operations/SubtractBlend.metal index 3c5ffe16..9bbb14af 100644 --- a/Sources/GPUImage/Operations/SubtractBlend.metal +++ b/Sources/GPUImage/Operations/SubtractBlend.metal @@ -9,7 +9,7 @@ fragment half4 subtractBlendFragment(TwoInputVertexIO fragmentInput [[stage_in]] constexpr sampler quadSampler; half4 textureColor = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate); constexpr sampler quadSampler2; - half4 textureColor2 = inputTexture2.sample(quadSampler, fragmentInput.textureCoordinate2); + half4 textureColor2 = inputTexture2.sample(quadSampler2, fragmentInput.textureCoordinate2); return half4(textureColor.rgb - textureColor2.rgb, textureColor.a); } diff --git a/examples/Mac/FilterShowcase/FilterShowcase/FilterOperations.swift b/examples/Mac/FilterShowcase/FilterShowcase/FilterOperations.swift index 6fae5a66..c827e000 100755 --- a/examples/Mac/FilterShowcase/FilterShowcase/FilterOperations.swift +++ b/examples/Mac/FilterShowcase/FilterShowcase/FilterOperations.swift @@ -111,16 +111,16 @@ let filterOperations: [FilterOperationInterface] = [ sliderUpdateCallback: nil, filterOperationType: .singleInput ), - // FilterOperation( - // filter:{Sharpen()}, - // listName:"Sharpen", - // titleName:"Sharpen", - // 
sliderConfiguration:.enabled(minimumValue:-1.0, maximumValue:4.0, initialValue:0.0), - // sliderUpdateCallback: {(filter, sliderValue) in - // filter.sharpness = sliderValue - // }, - // filterOperationType:.singleInput - // ), + FilterOperation( + filter: { Sharpen() }, + listName: "Sharpen", + titleName: "Sharpen", + sliderConfiguration: .enabled(minimumValue: -1.0, maximumValue: 4.0, initialValue: 0.0), + sliderUpdateCallback: { (filter, sliderValue) in + filter.sharpness = sliderValue + }, + filterOperationType: .singleInput + ), // FilterOperation( // filter:{UnsharpMask()}, // listName:"Unsharp mask",
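For anyone who wants to exercise these changes end to end, here is a short sketch that runs the re-enabled Sharpen operation into one of the two-input blends whose second-texture sampling this diff fixes. It assumes the GPUImage 2-style Swift API (PictureInput, PictureOutput, and the `-->` operator) applies to this Metal port; the asset names and the synchronous processing order are illustrative assumptions, not part of this change.

```swift
import GPUImage
import UIKit

// Sketch only: sharpen a base image, then dissolve-blend an overlay on top of it.
// DissolveBlend (like the other blends touched here) samples its second input,
// which is what the quadSampler2 fix in the shaders above affects.
let base = PictureInput(image: UIImage(named: "base.jpg")!)       // placeholder asset
let overlay = PictureInput(image: UIImage(named: "overlay.png")!) // placeholder asset

let sharpen = Sharpen()
sharpen.sharpness = 1.5 // documented range is -4.0 ... 4.0, with 0.0 meaning no change

let blend = DissolveBlend()
blend.mix = 0.5 // 0.0 = sharpened base only, 1.0 = overlay only

let output = PictureOutput()
output.imageAvailableCallback = { blended in
    print("Blended result: \(blended.size)")
}

base --> sharpen --> blend
overlay --> blend
blend --> output

// Process the overlay first so the blend has both textures available.
overlay.processImage(synchronously: true)
base.processImage(synchronously: true)
```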