[src/docs] Add xml documentation for types. (#20672)

Import all the xml documentation for types from https://github.com/xamarin/apple-api-docs.

Some of this documentation should probably be rewritten, and potentially moved
to conceptual documentation, in particular those that contain images (because
images can't be imported into xml documentation).

Note that the documentation hasn't been modified in any way; that's not the purpose of this PR. If documentation should be modified for whatever reason, it can be done in a later PR.

The xml documentation for members will come in a later PR.

Partial fix for https://github.com/xamarin/xamarin-macios/issues/17399.
Rolf Bjarne Kvinge 2024-06-06 07:37:52 +02:00 committed by GitHub
Parent 45b0e70bc0
Commit 7670afb6aa
No key found matching this signature
GPG key ID: B5690EEEBB952194
473 changed files: 23760 additions and 2146 deletions

@ -0,0 +1,16 @@
<Documentation>
<Docs DocId="T:ARKit.ARSession">
<summary>Manages the camera capture, motion processing, and image analysis necessary to create a mixed-reality experience.</summary>
<remarks>
<para>An <see cref="T:ARKit.ARSession" /> object represents the system resources required for a mixed-reality experience. The <see cref="M:ARKit.ARSession.Run(ARKit.ARConfiguration,ARKit.ARSessionRunOptions)" /> method must be passed an <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=ARKit%20ARSession%20Configuration&amp;scope=Xamarin" title="T:ARKit.ARSessionConfiguration">T:ARKit.ARSessionConfiguration</a></format> object that controls specific ebhaviors. </para>
<para>Developers who use the <see cref="T:ARKit.ARSCNView" /> to present their AR imagery do not need to instantiate their own <see cref="T:ARKit.ARSession" /> object but instead should call <see cref="M:ARKit.ARSession.Run(ARKit.ARConfiguration,ARKit.ARSessionRunOptions)" /> on the <see cref="P:ARKit.ARSCNView.Session" /> property. For example:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var arView = new ARSCNView();
var arConfig = new ARWorldTrackingSessionConfiguration { PlaneDetection = ARPlaneDetection.Horizontal };
arView.Session.Run (arConfig);
]]></code>
</example>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,14 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAsset">
<summary>Base class for timed video and audio.</summary>
<remarks>
<para>An <see cref="T:AVFoundation.AVAsset" /> represents one or more media assets. These are held in its <see cref="P:AVFoundation.AVAsset.Tracks" /> property. Additionally, <see cref="T:AVFoundation.AVAsset" />s include metadata, track grouping, and preferences about the media.</para>
<para>Because media assets such as movies are large, instantiating an <see cref="T:AVFoundation.AVAsset" /> will not automatically load the file. Properties are loaded when they are queried or via explicit calls to <see cref="M:AVFoundation.AVAsset.LoadValuesTaskAsync(System.String[])" /> or <see cref="M:AVFoundation.AVAsset.LoadValuesAsynchronously(System.String[],System.Action)" />.</para>
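<para>For illustration, here is a minimal sketch (the file name is hypothetical) of requesting the "duration" property on demand before reading it:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
async Task ShowDurationAsync ()
{
	var asset = AVAsset.FromUrl (NSUrl.FromFilename ("movie.m4v")); // hypothetical local file
	// Properties are loaded lazily; request "duration" explicitly before reading it.
	await asset.LoadValuesTaskAsync (new [] { "duration" });
	if (asset.StatusOfValue ("duration", out NSError error) == AVKeyValueStatus.Loaded)
		Console.WriteLine (asset.Duration);
}
]]></code>
</example>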
<para>During playback, the current presentation state of an <see cref="T:AVFoundation.AVAsset" /> is represented by an <see cref="T:AVFoundation.AVPlayerItem" /> object, and the playback is controlled by a <see cref="T:AVFoundation.AVPlayer" />:</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AssetPlayerItemPlayer.png" alt="UML Class Diagram illustrating classes relating to AVAsset" />
</para>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAsset_Class/index.html">Apple documentation for <c>AVAsset</c></related>
</Docs>
</Documentation>

@ -0,0 +1,29 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel">
<summary>Enumerates attenuation models used by <see cref="T:AVFoundation.AVAudioEnvironmentDistanceAttenuationParameters" />.</summary>
<remarks>
<para>Graph of <c>Gain</c> as Distance ranges from 0 to 10 with: <c>ReferenceDistance = 5</c>, <c>RolloffFactor = 0.5</c>, and <c>MaximumDistance = 20</c></para>
<para>
<see cref="F:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Exponential" />
</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Exponential.png" alt="Graph of exponential attenuation">
</img>
</para>
<para>
<see cref="F:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Inverse" />
</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Inverse.png" alt="Graph of inverse attenuation">
</img>
</para>
<para>
<see cref="F:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Linear" />
</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Linear.png" alt="Graph of linear attenuation">
</img>
</para>
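<para>For illustration, a minimal sketch (assuming an <see cref="T:AVFoundation.AVAudioEnvironmentNode" /> is being configured) that selects the exponential model with the parameter values used in the graphs above:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var environment = new AVAudioEnvironmentNode ();
var attenuation = environment.DistanceAttenuationParameters;
attenuation.DistanceAttenuationModel = AVAudioEnvironmentDistanceAttenuationModel.Exponential;
attenuation.ReferenceDistance = 5;
attenuation.RolloffFactor = 0.5f;
attenuation.MaximumDistance = 20;
]]></code>
</example>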
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,29 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAudioRecorder">
<summary>Audio recording class.</summary>
<remarks>
<para>
To create instances of this class use the factory method <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=AVFoundation%20AVRecorder%20To%20Url(%20Foundation%20NSUrl%20, %20AVFoundation%20AVAudio%20Recorder%20Settings%20,Foundation%20NSError%20)&amp;scope=Xamarin" title="M:AVFoundation.AVRecorder.ToUrl(Foundation.NSUrl, AVFoundation.AVAudioRecorderSettings,Foundation.NSError)">M:AVFoundation.AVRecorder.ToUrl(Foundation.NSUrl, AVFoundation.AVAudioRecorderSettings,Foundation.NSError)</a></format></para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var settings = new AVAudioRecorderSettings () {
AudioFormat = AudioFormatType.LinearPCM,
AudioQuality = AVAudioQuality.High,
SampleRate = 44100f,
NumberChannels = 1
};
var recorder = AVAudioRecorder.ToUrl (url, settings, out error);
if (recorder == null){
Console.WriteLine (error);
return;
}
recorder.PrepareToRecord ();
recorder.Record ();
]]></code>
</example>
</remarks>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Media/Sound/Play_Sound">Play Sound</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Media/Sound/Record_Sound">Record Sound</related>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioRecorder_ClassReference/index.html">Apple documentation for <c>AVAudioRecorder</c></related>
</Docs>
</Documentation>

@ -0,0 +1,67 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAudioSession">
<summary>Coordinates an audio playback or capture session.</summary>
<remarks>
<para> Application developers should use the singleton object
retrieved by <see cref="M:AVFoundation.AVAudioSession.SharedInstance" />.
</para>
<para>
Because the audio hardware of an iOS device is shared
between all apps, audio settings can only be "preferred" (see
<c>SetPreferred*</c> methods) and the application developer
must account for use-cases where these preferences are
overridden.
</para>
<para>
The interaction of an app with other apps and system
services is determined by your audio category. You can use the <see cref="M:AVFoundation.AVAudioSession.SetCategory(System.String,System.String,AVFoundation.AVAudioSessionRouteSharingPolicy,AVFoundation.AVAudioSessionCategoryOptions,Foundation.NSError@)" /> method to set this category.
</para>
<para>
You should also control the Mode (using <see cref="M:AVFoundation.AVAudioSession.SetMode(Foundation.NSString,Foundation.NSError@)" />) to
describe how your application will use audio.
</para>
<para>
As is common in AV Foundation, many methods in <see cref="T:AVFoundation.AVAudioSession" /> are
asynchronous and properties may take some time to reflect
their final status. Application developers should be familiar
with asynchronous programming techniques.
</para>
<para>
The <see cref="T:AVFoundation.AVAudioSession" />,
like the <see cref="T:AVFoundation.AVCaptureSession" /> and <see cref="T:AVFoundation.AVAssetExportSession" /> is a
coordinating object between some number of <see cref="P:AVFoundation.AVAudioSession.InputDataSources" />
and <see cref="P:AVFoundation.AVAudioSession.OutputDataSources" />.
</para>
<para>
You can register to a few notifications that are posted by the audio system, by using the convenience methods in <see cref="T:AVFoundation.AVAudioSession.Notifications" />.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
void Setup ()
{
AVAudioSession.SharedInstance ().Init ();
NSError error;
if (!AVAudioSession.SharedInstance ().SetCategory (AVAudioSessionCategory.Playback, out error)) {
ReportError (error);
return;
}
AVAudioSession.Notifications.ObserveInterruption (ToneInterruptionListener);
if (!AVAudioSession.SharedInstance ().SetActive (true, out error)) {
ReportError (error);
return;
}
void ToneInterruptionListener (object sender, AVAudioSessionInterruptionEventArgs interruptArgs)
{
//
}
}
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/index.html">Apple documentation for <c>AVAudioSession</c></related>
</Docs>
</Documentation>

@ -0,0 +1,11 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVCaptureConnection">
<summary>The link between capture input and capture output objects during a capture session.</summary>
<remarks>
<para>A <see cref="T:AVFoundation.AVCaptureConnection" /> encapsulates the link between an <see cref="T:AVFoundation.AVCaptureInput" /> (more specifically, between an individual <see cref="T:AVFoundation.AVCaptureInputPort" /> in the <see cref="P:AVFoundation.AVCaptureInput.Ports" /> property of the <see cref="T:AVFoundation.AVCaptureInput" /> and the <see cref="T:AVFoundation.AVCaptureOutput" />).</para>
<para>
<see cref="T:AVFoundation.AVCaptureConnection" />s are formed automatically when inputs and outputs are added via <see cref="M:AVFoundation.AVCaptureSession.AddInput(AVFoundation.AVCaptureInput)" /> and <see cref="M:AVFoundation.AVCaptureSession.AddOutput(AVFoundation.AVCaptureOutput)" />.</para>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureConnection_Class/index.html">Apple documentation for <c>AVCaptureConnection</c></related>
</Docs>
</Documentation>

@ -0,0 +1,185 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVCaptureSession">
<summary>Coordinates a recording session.</summary>
<remarks>
<para>
The AVCaptureSession object coordinates the recording of video
or audio input and passing the recorded information to one or
more output objects. As the iOS product line has advanced, devices have gained multiple capture devices (in particular, multiple cameras). Application developers can use <see cref="M:AVFoundation.AVCaptureDevice.DefaultDeviceWithMediaType(System.String)" /> or <see cref="M:AVFoundation.AVCaptureDevice.DevicesWithMediaType(System.String)" />, passing in the constants defined in <see cref="T:AVFoundation.AVMediaType" />.
</para>
<para>
Configuring capture consists of setting the <see cref="P:AVFoundation.AVCaptureSession.Inputs" /> and <see cref="P:AVFoundation.AVCaptureSession.Outputs" /> properties of the <see cref="T:AVFoundation.AVCaptureSession" />. Notice that multiple <see cref="T:AVFoundation.AVCaptureInput" />s and <see cref="T:AVFoundation.AVCaptureOutput" />s are possible. For instance, to capture both audio and video, one would use two capture inputs:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var session = new AVCaptureSession();
var camera = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
var mic = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Audio);
if(camera == null || mic == null){
throw new Exception("Can't find devices");
}
var cameraInput = AVCaptureDeviceInput.FromDevice (camera);
//info.plist _must_ contain NSMicrophoneUsageDescription key
var micInput = AVCaptureDeviceInput.FromDevice (mic);
if(session.CanAddInput(cameraInput)){
session.AddInput(cameraInput);
}
if(session.CanAddInput(micInput)){
session.AddInput(micInput);
}
]]></code>
</example>
<para>Note that permission to access the microphone (and in some regions, the camera) must be given by the user, requiring the developer to add the <c>NSMicrophoneUsageDescription</c> to the application's info.plist file.</para>
<para>Video can be captured directly to file with <see cref="T:AVFoundation.AVCaptureMovieFileOutput" />. However, this class has no display-able data and cannot be used simultaneously with <see cref="T:AVFoundation.AVCaptureVideoDataOutput" />. Instead, application developers can use it in combination with a <see cref="T:AVFoundation.AVCaptureVideoPreviewLayer" />, as shown in the following example:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var layer = new AVCaptureVideoPreviewLayer (session);
layer.VideoGravity = AVLayerVideoGravity.ResizeAspectFill;
var cameraView = new UIView ();
cameraView.Layer.AddSublayer (layer);
var filePath = Path.Combine (Path.GetTempPath (), "temporary.mov");
var fileUrl = NSUrl.FromFilename (filePath);
var movieFileOutput = new AVCaptureMovieFileOutput ();
var recordingDelegate = new MyRecordingDelegate ();
session.AddOutput (movieFileOutput);
movieFileOutput.StartRecordingToOutputFile (fileUrl, recordingDelegate);
]]></code>
</example>
<para>Application developers should note that the function <see cref="M:AVFoundation.AVCaptureFileOutput.StopRecording" /> is asynchronous; developers should wait until the <see cref="M:AVFoundation.AVCaptureFileOutputRecordingDelegate.FinishedRecording(AVFoundation.AVCaptureFileOutput,Foundation.NSUrl,Foundation.NSObject[],Foundation.NSError)" /> delegate method has been called before manipulating the file (for instance, before saving it to the Photos album with <see cref="M:UIKit.UIVideo.SaveToPhotosAlbum(System.String,UIKit.UIVideo.SaveStatus)" /> or <see cref="M:AssetsLibrary.ALAssetsLibrary.WriteVideoToSavedPhotosAlbumAsync(Foundation.NSUrl)" />).</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
public class MyRecordingDelegate : AVCaptureFileOutputRecordingDelegate
{
public override void FinishedRecording (AVCaptureFileOutput captureOutput, NSUrl outputFileUrl, NSObject [] connections, NSError error)
{
if (UIVideo.IsCompatibleWithSavedPhotosAlbum (outputFileUrl.Path))
{
var library = new ALAssetsLibrary ();
library.WriteVideoToSavedPhotosAlbum (outputFileUrl, (path, e2) =>
{
if (e2 != null)
{
new UIAlertView ("Error", e2.ToString (), null, "OK", null).Show ();
}
else
{
new UIAlertView ("Saved", "Saved to Photos", null, "OK", null).Show ();
File.Delete (outputFileUrl.Path);
}
});
}
else
{
new UIAlertView ("Incompatible", "Incompatible", null, "OK", null).Show ();
}
}
} ]]></code>
</example>
<para>
Application developers can configure one or more output ports for the
captured data: still frames, video frames with timing information,
audio samples, or QuickTime movie files; the data can also be rendered directly to a CoreAnimation layer.
</para>
<para>
Once the input and output components of
the session are set, the actual processing is begun by calling the
<see cref="M:AVFoundation.AVCaptureSession.StartRunning" />
method.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
bool SetupCapture ()
{
// configure the capture session for low resolution, change this if your code
// can cope with more data or volume
session = new AVCaptureSession () {
SessionPreset = AVCaptureSession.PresetMedium
};
// create a device input and attach it to the session
var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
var input = AVCaptureDeviceInput.FromDevice (captureDevice);
if (input == null){
Console.WriteLine ("No video input device");
return false;
}
session.AddInput (input);
// create a VideoDataOutput and add it to the sesion
var output = new AVCaptureVideoDataOutput () {
VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA),
// If you want to cap the frame rate at a given speed, in this sample: 15 frames per second
MinFrameDuration = new CMTime (1, 15)
};
// configure the output
queue = new CoreFoundation.DispatchQueue ("myQueue");
outputRecorder = new OutputRecorder ();
output.SetSampleBufferDelegateAndQueue (outputRecorder, queue);
session.AddOutput (output);
session.StartRunning ();
return true;
}
public class OutputRecorder : AVCaptureVideoDataOutputSampleBufferDelegate {
public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
{
try {
var image = ImageFromSampleBuffer (sampleBuffer);
// Do something with the image, we just stuff it in our main view.
AppDelegate.ImageView.BeginInvokeOnMainThread (delegate {
AppDelegate.ImageView.Image = image;
});
//
// Although this looks innocent "Oh, he is just optimizing this case away"
// this is incredibly important to call on this callback, because the AVFoundation
// has a fixed number of buffers and if it runs out of free buffers, it will stop
// delivering frames.
//
sampleBuffer.Dispose ();
} catch (Exception e){
Console.WriteLine (e);
}
}
UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
{
// Get the CoreVideo image
using (var pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer){
// Lock the base address
pixelBuffer.Lock (0);
// Get the number of bytes per row for the pixel buffer
var baseAddress = pixelBuffer.BaseAddress;
int bytesPerRow = pixelBuffer.BytesPerRow;
int width = pixelBuffer.Width;
int height = pixelBuffer.Height;
var flags = CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little;
// Create a CGImage on the RGB colorspace from the configured parameter above
using (var cs = CGColorSpace.CreateDeviceRGB ())
using (var context = new CGBitmapContext (baseAddress,width, height, 8, bytesPerRow, cs, (CGImageAlphaInfo) flags))
using (var cgImage = context.ToImage ()){
pixelBuffer.Unlock (0);
return UIImage.FromImage (cgImage);
}
}
}
}
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureSession_Class/index.html">Apple documentation for <c>AVCaptureSession</c></related>
</Docs>
</Documentation>

@ -0,0 +1,46 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVMetadataItem">
<summary>An immutable item of metadata for an <see cref="T:AVFoundation.AVAsset" />.</summary>
<remarks>
<para>Metadata uses the concept of a "Key Space" that is a group of related identifiers, something like a programmatic namespace. These key spaces are defined in <see cref="T:AVFoundation.AVMetadata" />:</para>
<list type="table">
<listheader>
<term>Key Space</term>
<description>Constant</description>
</listheader>
<item>
<term>Common (contains standard version of most keys in most key spaces)</term>
<description>
<see cref="P:AVFoundation.AVMetadata.KeySpaceCommon" />
</description>
</item>
<item>
<term>ID 3</term>
<description>
<see cref="P:AVFoundation.AVMetadata.KeySpaceID3" />
</description>
</item>
<item>
<term>iTunes</term>
<description>
<see cref="P:AVFoundation.AVMetadata.KeySpaceiTunes" />
</description>
</item>
<item>
<term>QuickTime User Data</term>
<description>
<see cref="P:AVFoundation.AVMetadata.KeySpaceQuickTimeUserData" />
</description>
</item>
<item>
<term>QuickTime Metadata</term>
<description>
<see cref="P:AVFoundation.AVMetadata.KeySpaceQuickTimeMetadata" />
</description>
</item>
</list>
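<para>For illustration, a minimal sketch (assuming an already-loaded <see cref="T:AVFoundation.AVAsset" /> named <c>asset</c>) that prints each common metadata item together with its key space:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
foreach (var item in asset.CommonMetadata) {
	// KeySpace identifies which of the groups in the table above the key belongs to.
	Console.WriteLine ("{0}/{1}: {2}", item.KeySpace, item.CommonKey, item.Value);
}
]]></code>
</example>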
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVMetadataItem_Class/index.html">Apple documentation for <c>AVMetadataItem</c></related>
</Docs>
</Documentation>

@ -0,0 +1,85 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVMetadataMachineReadableCodeObject">
<summary>A <see cref="T:AVFoundation.AVMetadataObject" /> that contains barcode information.</summary>
<remarks>
<para>The following barcode types can be natively recognized in iOS 7:</para>
<list type="table">
<listheader>
<term>Barcode type</term>
<description>Constant for use with <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=AVFoundation%20AVMetadata%20Output%20Metadata%20Object%20Type&amp;scope=Xamarin" title="P:AVFoundation.AVMetadataOutput.MetadataObjectType">P:AVFoundation.AVMetadataOutput.MetadataObjectType</a></format></description>
</listheader>
<item>
<term>Aztec</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeAztecCode" />
</description>
</item>
<item>
<term>Code 39</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeCode39Code" />
</description>
</item>
<item>
<term>Code 39 mod 43</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeCode39Mod43Code" />
</description>
</item>
<item>
<term>Code 93</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeCode93Code" />
</description>
</item>
<item>
<term>Code 128</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeCode128Code" />
</description>
</item>
<item>
<term>PDF417</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypePDF417Code" />
</description>
</item>
<item>
<term>QR</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeQRCode" />
</description>
</item>
<item>
<term>UPC-E</term>
<description>
<see cref="P:AVFoundation.AVMetadataObject.TypeUPCECode" />
</description>
</item>
</list>
<para>To recognize a barcode, application developers assign the desired barcode types to the <see cref="P:AVFoundation.AVCaptureMetadataOutput.MetadataObjectTypes" /> property. This must be done after the <see cref="T:AVFoundation.AVCaptureMetadataOutput" /> has been added to the <see cref="P:AVFoundation.AVCaptureSession.Outputs" /> of a <see cref="T:AVFoundation.AVCaptureSession" /> object, as shown in the following code:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
session = new AVCaptureSession();
var camera = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
var input = AVCaptureDeviceInput.FromDevice(camera);
//Add the metadata output channel
metadataOutput = new AVCaptureMetadataOutput();
var metadataDelegate = new MyMetadataOutputDelegate();
metadataOutput.SetDelegate(metadataDelegate, DispatchQueue.MainQueue);
session.AddOutput(metadataOutput);
//Confusing! *After* adding to session, tell output what to recognize...
metadataOutput.MetadataObjectTypes = new NSString[] {
AVMetadataObject.TypeQRCode,
AVMetadataObject.TypeEAN13Code
};
session.StartRunning();
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVMetadataMachineReadableCodeObject_Class/index.html">Apple documentation for <c>AVMetadataMachineReadableCodeObject</c></related>
</Docs>
</Documentation>

@ -0,0 +1,24 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVPlayer">
<summary>Encapsulates the control and UI of a component that plays back single or multiple items.</summary>
<remarks>
<para>An <see cref="T:AVFoundation.AVPlayer" /> links the visual presentation, control, and dynamic state of one or more <see cref="T:AVFoundation.AVAsset" />s.</para>
<para>The visual element of the display is provided by a <see cref="T:AVFoundation.AVPlayerLayer" />, while its current state (current time, etc.) is held in an <see cref="T:AVFoundation.AVPlayerItem" />, which in turn references an <see cref="T:AVFoundation.AVAsset" />.</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AssetPlayerItemPlayer.png" alt="Class diagram showing the important classes related to AssetPlayerItemPlayer" />
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var asset = AVAsset.FromUrl(NSUrl.FromFilename("sample.m4v"));
var playerItem = new AVPlayerItem(asset);
var player = new AVPlayer(playerItem);
var playerLayer = AVPlayerLayer.FromPlayer(player);
View.Layer.AddSublayer(playerLayer);
player.Play();
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayer_Class/index.html">Apple documentation for <c>AVPlayer</c></related>
</Docs>
</Documentation>

@ -0,0 +1,24 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVPlayerLayer">
<summary>A type of <see cref="T:CoreAnimation.CALayer" /> on which a <see cref="T:AVFoundation.AVPlayer" /> renders its output.</summary>
<remarks>
<para>
<see cref="T:AVFoundation.AVPlayerLayer" /> objects may be used for AV playback in a variety of situations where a heavier-weight component such as <see cref="T:UIKit.UIWebView" /> would be unnecessary or unavailable (such as on tvOS).</para>
<para>The following example shows how a <see cref="T:AVFoundation.AVPlayerLayer" /> can be used to stream a Web video:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var src = NSUrl.FromString("https://somevideo");
var asset = AVAsset.FromUrl (src);
var playerItem = new AVPlayerItem (asset);
var player = new AVPlayer (playerItem);
var playerLayer = AVPlayerLayer.FromPlayer (player);
var frame = new CGRect (0, 0, this.View.Frame.Width, this.View.Frame.Height);
playerLayer.Frame = frame;
this.View.Layer.AddSublayer (playerLayer);
player.Play ();
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayerLayer_Class/index.html">Apple documentation for <c>AVPlayerLayer</c></related>
</Docs>
</Documentation>

@ -0,0 +1,19 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVSpeechSynthesizer">
<summary>Synthesizes speech and raises events relating to text-to-speech.</summary>
<remarks>
<para>In its simplest form, text-to-speech can be done with just two classes:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var ss = new AVSpeechSynthesizer();
var su = new AVSpeechUtterance("Microphone check. One, two, one two.") {
Rate = 0.25f
};
ss.SpeakUtterance(su);
]]></code>
</example>
<para>The <see cref="T:AVFoundation.AVSpeechSynthesizer" /> maintains an internal queue of <see cref="T:AVFoundation.AVSpeechUtterance" />s. The queue is not accessible to application developers, but the synthesizer can be paused or stopped with <see cref="M:AVFoundation.AVSpeechSynthesizer.PauseSpeaking(AVFoundation.AVSpeechBoundary)" /> and <see cref="M:AVFoundation.AVSpeechSynthesizer.StopSpeaking(AVFoundation.AVSpeechBoundary)" />. Events such as <see cref="E:AVFoundation.AVSpeechSynthesizer.DidStartSpeechUtterance" /> or <see cref="E:AVFoundation.AVSpeechSynthesizer.WillSpeakRangeOfSpeechString" /> are opportunities for the application developer to modify previously-enqueued sequences.</para>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVSpeechSynthesizer_Ref/index.html">Apple documentation for <c>AVSpeechSynthesizer</c></related>
</Docs>
</Documentation>

@ -0,0 +1,10 @@
<Documentation>
<Docs DocId="T:AVFoundation.IAVPlayerItemMetadataCollectorPushDelegate">
<summary>Interface representing the required methods (if any) of the protocol <see cref="T:AVFoundation.AVPlayerItemMetadataCollectorPushDelegate" />.</summary>
<remarks>
<para>This interface contains the required methods (if any) from the protocol defined by <see cref="T:AVFoundation.AVPlayerItemMetadataCollectorPushDelegate" />.</para>
<para>If developers create classes that implement this interface, the implementation methods will automatically be exported to Objective-C with the matching signature from the method defined in the <see cref="T:AVFoundation.AVPlayerItemMetadataCollectorPushDelegate" /> protocol.</para>
<para>Optional methods (if any) are provided by the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=AVFoundation%20AVPlayer%20Item%20Metadata%20Collector%20Push%20Delegate_%20Extensions&amp;scope=Xamarin" title="T:AVFoundation.AVPlayerItemMetadataCollectorPushDelegate_Extensions">T:AVFoundation.AVPlayerItemMetadataCollectorPushDelegate_Extensions</a></format> class as extension methods to the interface, allowing developers to invoke any optional methods on the protocol.</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,17 @@
<Documentation>
<Docs DocId="T:AVKit.AVPlayerViewController">
<summary>A <see cref="T:UIKit.UIViewController" /> that provides a system-standard AV controller user experience.</summary>
<remarks>
<para>The <see cref="T:AVKit.AVPlayerViewController" /> provides a simple way of displaying a player (<format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=T:AVKit.AVPlayer&amp;scope=Xamarin" title="T:AVKit.AVPlayer">T:AVKit.AVPlayer</a></format>) with some standard controls.</para>
<para>
</para>
<para>Applications can set the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=T:AVKit.AVPlayer&amp;scope=Xamarin" title="T:AVKit.AVPlayer">T:AVKit.AVPlayer</a></format> by setting the <see cref="P:AVKit.AVPlayerViewController.Player" /> property.</para>
<para>
</para>
<para>Applications can customize the user interface by adding their own user interface elements to the <see cref="P:AVKit.AVPlayerViewController.ContentOverlayView" />.</para>
<para>
</para>
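<para>For illustration, a minimal sketch (the media URL is hypothetical) of presenting the controller from an existing <see cref="T:UIKit.UIViewController" />:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var playerController = new AVPlayerViewController ();
playerController.Player = new AVPlayer (NSUrl.FromString ("https://example.com/video.m4v")); // hypothetical URL
// Additional UI elements can be added to playerController.ContentOverlayView here.
PresentViewController (playerController, true, null);
]]></code>
</example>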
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayerViewController_Class/index.html">Apple documentation for <c>AVPlayerViewController</c></related>
</Docs>
</Documentation>

@ -0,0 +1,21 @@
<Documentation>
<Docs DocId="T:Accounts.ACAccount">
<summary>Represents a user account stored in the Accounts database.</summary>
<remarks>
<para>iOS stores account information for certain social-network accounts in a system-wide database. A <see cref="T:Accounts.ACAccount" /> represents a single account.</para>
<para>Accounts can either be retrieved based on account-type or via a known identifier (see <see cref="P:Accounts.ACAccount.Identifier" />): </para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var store = new ACAccountStore();
var type = store.FindAccountType(ACAccountType.Twitter);
var accounts = store.FindAccounts(type);
if(accounts != null && accounts.Count() > 0){...}
var account = store.FindAccount("21A78660-FFFF-FFFF-FFFF-027EB7E3FF5F");
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/Accounts/Reference/ACAccountClassRef/index.html">Apple documentation for <c>ACAccount</c></related>
</Docs>
</Documentation>

@ -0,0 +1,39 @@
<Documentation>
<Docs DocId="T:Accounts.ACAccountStore">
<summary>Encapsulates the Accounts database, providing access to <see cref="T:Accounts.ACAccount" /> objects.</summary>
<remarks>
<para>The Accounts database on iOS provides account information for social networks. The social networks stored in the database are:</para>
<list type="bullet">
<item>
<term>
<see cref="P:Accounts.ACAccountType.Facebook" />
</term>
</item>
<item>
<term>
<see cref="P:Accounts.ACAccountType.SinaWeibo" />
</term>
</item>
<item>
<term>
<see cref="P:Accounts.ACAccountType.Twitter" />
</term>
</item>
</list>
<para>
The following example shows the basic structure of code accessing the <see cref="T:Accounts.ACAccountStore" /> object, requesting access to a specific account, and retrieving credentials:
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var store = new ACAccountStore();
var type = store.FindAccountType(ACAccountType.Twitter);
store.RequestAccess(type, null, (granted, error) => {
	if (granted) {
		// Access granted; the accounts (and their Credential properties)
		// can now be read from the store.
		var accounts = store.FindAccounts(type);
	}
});
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/Accounts/Reference/ACAccountStoreClassRef/index.html">Apple documentation for <c>ACAccountStore</c></related>
</Docs>
</Documentation>

@ -0,0 +1,280 @@
<Documentation>
<Docs DocId="T:AddressBookUI.ABPeoplePickerNavigationController">
<summary>A <see cref="T:UIKit.UINavigationController" /> that allows the application user to select a contact or contact information from an <see cref="T:AddressBook.ABAddressBook" />.</summary>
<remarks>
<para>In iOS 8 and later, bringing up a people-picker navigation controller does not require the app to have access to the user's contacts, and the user will not be prompted to grant access. If the app does not itself have access to the user's contacts, a temporary copy of the contact selected by the user will be returned to the app.</para>
<para>
<format type="text/html">
<span>See a <a href="https://github.com/xamarin/monotouch-samples/tree/master/ios8/PeoplePicker">sample project</a> illustrating the use of a people-picker navigation controller.</span>
</format>
</para>
<para>
With the introduction of iOS 8.0, it is possible to filter the
information displayed by setting one or more of the Predicate
properties in this class.
</para>
<para>
The predicates are: <see cref="P:AddressBookUI.ABPeoplePickerNavigationController.PredicateForEnablingPerson" />,
<see cref="P:AddressBookUI.ABPeoplePickerNavigationController.PredicateForSelectionOfPerson" />
and <see cref="P:AddressBookUI.ABPeoplePickerNavigationController.PredicateForSelectionOfProperty" />.
</para>
<para>
The predicates can use the various members in <see cref="T:AddressBookUI.ABPersonPredicateKey" /> as
constants in the above predicates, or you can use the hardcoded strings shown below.
</para>
<para>
Each one of these properties has either a native type (like a
string), an array of elements or is a structured type that
contains elements that you can access from the predicate
expression.
</para>
<list type="table">
<listheader>
<term>Structured Type Name</term>
<description>Property Contents</description>
</listheader>
<item>
<term>LabeledValue</term>
<description>'label' and 'value'.</description>
</item>
<item>
<term>PhoneNumber</term>
<description>'stringValue', 'countryCode', 'formattedStringValue' and 'normalizedStringValue'</description>
</item>
<item>
<term>InstantMessageAddress</term>
<description>'username' and 'service'</description>
</item>
<item>
<term>SocialProfile</term>
<description>'username' and 'service'</description>
</item>
<item>
<term>PostalAddress</term>
<description>'street' property, 'subLocality' property, 'city' property, 'subAdministrativeArea' property, 'state' property, 'postalCode' property, 'country/region' and 'ISOCountryCode'.</description>
</item>
</list>
<list type="table">
<listheader>
<term>ABPersonPredicateKey</term>
<description>String Name</description>
<description>Key value</description>
</listheader>
<item>
<term>NamePrefix</term>
<description>"namePrefix"</description>
<description>string</description>
</item>
<item>
<term>GivenName</term>
<description>"givenName"</description>
<description>string</description>
</item>
<item>
<term>MiddleName</term>
<description>"middleName"</description>
<description>string</description>
</item>
<item>
<term>FamilyName</term>
<description>"familyName"</description>
<description>string</description>
</item>
<item>
<term>NameSuffix</term>
<description>"nameSuffix"</description>
<description>string</description>
</item>
<item>
<term>PreviousFamilyName</term>
<description>"previousFamilyName"</description>
<description>string</description>
</item>
<item>
<term>Nickname</term>
<description>"nickname"</description>
<description>string</description>
</item>
<item>
<term>PhoneticGivenName</term>
<description>"phoneticGivenName"</description>
<description>string</description>
</item>
<item>
<term>PhoneticMiddleName</term>
<description>"phoneticMiddleName"</description>
<description>string</description>
</item>
<item>
<term>PhoneticFamilyName</term>
<description>"phoneticFamilyName"</description>
<description>string</description>
</item>
<item>
<term>OrganizationName</term>
<description>"organizationName"</description>
<description>string</description>
</item>
<item>
<term>DepartmentName</term>
<description>"departmentName"</description>
<description>string</description>
</item>
<item>
<term>JobTitle</term>
<description>"jobTitle"</description>
<description>string</description>
</item>
<item>
<term>Birthday</term>
<description>"birthday"</description>
<description>NSDateComponents</description>
</item>
<item>
<term>Note</term>
<description>"note"</description>
<description>string</description>
</item>
<item>
<term>PhoneNumbers</term>
<description>"phoneNumbers"</description>
<description>Array of LabeledValue with PhoneNumber values</description>
</item>
<item>
<term>EmailAddresses</term>
<description>"emailAddresses"</description>
<description>array of LabeledValue with string values</description>
</item>
<item>
<term>UrlAddresses</term>
<description>"urlAddresses"</description>
<description>array of LabeledValue with string values</description>
</item>
<item>
<term>Dates</term>
<description>"dates"</description>
<description>array of LabeledValue with NSDateComponents values</description>
</item>
<item>
<term>InstantMessageAddresses</term>
<description>"instantMessageAddresses"</description>
<description>array of LabeledValue with InstantMessageAddress values</description>
</item>
<item>
<term>RelatedNames</term>
<description>"relatedNames"</description>
<description>array of LabeledValue with string values</description>
</item>
<item>
<term>SocialProfiles</term>
<description>"socialProfiles"</description>
<description>array of LabeledValue with SocialProfile values</description>
</item>
<item>
<term>PostalAddresses</term>
<description>"postalAddresses"</description>
<description>array of LabeledValue with PostalAddress values</description>
</item>
</list>
<example>
<code lang="csharp lang-csharp"><![CDATA[[Register ("CompatibleEmailPickerViewController")]
public class CompatibleEmailPickerViewController : UIViewController
{
[Outlet]
UILabel ResultLabel { get ; set; }
public CompatibleEmailPickerViewController (IntPtr handle)
: base (handle)
{
}
[Export("showPicker:")]
void ShowPicker(NSObject sender)
{
ABPeoplePickerNavigationController picker = new ABPeoplePickerNavigationController ();
// Hook up to both events to support iOS 7 and iOS 8 idioms
// Hooks up to the iOS 7 and lower idioms
picker.SelectPerson += HandleSelectPerson;
picker.PerformAction += HandlePerformAction;
// Hook up to the new iOS 8 idioms and parameters
picker.SelectPerson2 += HandleSelectPerson2;
picker.PerformAction2 += HandlePerformAction2;
picker.Cancelled += HandleCancelled;
// The people picker will only display the person's name,
// image and email properties in ABPersonViewController.
picker.DisplayedProperties.Add (ABPersonProperty.Email);
// The people picker will enable selection
// of persons that have at least one email address.
if(picker.RespondsToSelector(new Selector("setPredicateForEnablingPerson:")))
picker.PredicateForEnablingPerson = NSPredicate.FromFormat ("emailAddresses.@count > 0");
// The people picker will select a person that has exactly one email address and
// call peoplePickerNavigationController:didSelectPerson:,
// otherwise the people picker will present an ABPersonViewController for the
// user to pick one of the email addresses.
if(picker.RespondsToSelector(new Selector("setPredicateForSelectionOfPerson:")))
picker.PredicateForSelectionOfPerson = NSPredicate.FromFormat ("emailAddresses.@count = 1");
PresentViewController (picker, true, null);
}
// iOS7 and below
void HandleSelectPerson (object sender, ABPeoplePickerSelectPersonEventArgs e)
{
var peoplePicker = (ABPeoplePickerNavigationController)sender;
e.Continue = false;
using (ABMultiValue<string> emails = e.Person.GetEmails ())
e.Continue = emails.Count == 1;
if (!e.Continue) {
ResultLabel.Text = PersonFormatter.GetPickedEmail (e.Person);
peoplePicker.DismissViewController (true, null);
}
}
// iOS8+
void HandleSelectPerson2 (object sender, ABPeoplePickerSelectPerson2EventArgs e)
{
ResultLabel.Text = PersonFormatter.GetPickedEmail (e.Person);
}
// iOS7 and below
void HandlePerformAction (object sender, ABPeoplePickerPerformActionEventArgs e)
{
var peoplePicker = (ABPeoplePickerNavigationController)sender;
ResultLabel.Text = PersonFormatter.GetPickedEmail (e.Person, e.Identifier);
peoplePicker.DismissViewController (true, null);
e.Continue = false;
}
// iOS8+
void HandlePerformAction2 (object sender, ABPeoplePickerPerformAction2EventArgs e)
{
ResultLabel.Text = PersonFormatter.GetPickedEmail (e.Person, e.Identifier);
}
void HandleCancelled (object sender, EventArgs e)
{
var peoplePicker = (ABPeoplePickerNavigationController)sender;
peoplePicker.DismissViewController (true, null);
}
}]]></code>
</example>
</remarks>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Shared_Resources/Contacts/Choose_a_Contact">Choose a Contact</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Shared_Resources/Contacts/Create_a_new_Contact">Create a New Contact</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Shared_Resources/Contacts/Find_a_Contact">Find a Contact</related>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AddressBookUI/Reference/ABPeoplePickerNavigationController_Class/index.html">Apple documentation for <c>ABPeoplePickerNavigationController</c></related>
</Docs>
</Documentation>

@ -0,0 +1,10 @@
<Documentation>
<Docs DocId="T:AddressBookUI.IABNewPersonViewControllerDelegate">
<summary>Interface representing the required methods (if any) of the protocol <see cref="T:AddressBookUI.ABNewPersonViewControllerDelegate" />.</summary>
<remarks>
<para>This interface contains the required methods (if any) from the protocol defined by <see cref="T:AddressBookUI.ABNewPersonViewControllerDelegate" />.</para>
<para>If developers create classes that implement this interface, the implementation methods will automatically be exported to Objective-C with the matching signature from the method defined in the <see cref="T:AddressBookUI.ABNewPersonViewControllerDelegate" /> protocol.</para>
<para>Optional methods (if any) are provided by the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Address%20Book%20UIABNew%20Person%20View%20Controller%20Delegate_%20Extensions&amp;scope=Xamarin" title="T:AddressBookUI.ABNewPersonViewControllerDelegate_Extensions">T:AddressBookUI.ABNewPersonViewControllerDelegate_Extensions</a></format> class as extension methods to the interface, allowing developers to invoke any optional methods on the protocol.</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,10 @@
<Documentation>
<Docs DocId="T:AddressBookUI.IABPersonViewControllerDelegate">
<summary>Interface representing the required methods (if any) of the protocol <see cref="T:AddressBookUI.ABPersonViewControllerDelegate" />.</summary>
<remarks>
<para>This interface contains the required methods (if any) from the protocol defined by <see cref="T:AddressBookUI.ABPersonViewControllerDelegate" />.</para>
<para>If developers create classes that implement this interface, the implementation methods will automatically be exported to Objective-C with the matching signature from the method defined in the <see cref="T:AddressBookUI.ABPersonViewControllerDelegate" /> protocol.</para>
<para>Optional methods (if any) are provided by the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Address%20Book%20UIABPerson%20View%20Controller%20Delegate_%20Extensions&amp;scope=Xamarin" title="T:AddressBookUI.ABPersonViewControllerDelegate_Extensions">T:AddressBookUI.ABPersonViewControllerDelegate_Extensions</a></format> class as extension methods to the interface, allowing developers to invoke any optional methods on the protocol.</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,10 @@
<Documentation>
<Docs DocId="T:CloudKit.CKContainer">
<summary>Encapsulates content associated with an app, including shared and per-user private data.</summary>
<remarks>
<para>The <see cref="T:CloudKit.CKContainer" /> class is the highest-level class in the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Cloud%20Kit&amp;scope=Xamarin" title="N:CloudKit">N:CloudKit</a></format> namespace. A CloudKit Container (sometimes referred to in Apple documentation as a "Ubiquity container") is an information store identified with a particular name. Apple advises using a name of the form "iCloud.{reverse DNS}.{appName}", for instance, "iCloud.com.mycompany.MyApp". Developers must create and configure their containers using the iCloud Dashboard, available by way of the Apple developer portal. To retrieve a container, developers use the static <see cref="M:CloudKit.CKContainer.FromIdentifier(System.String)" /> method.</para>
<para>A single <see cref="T:CloudKit.CKContainer" /> may contain both public data, which is shared between all instances of the app (see <see cref="P:CloudKit.CKContainer.PublicCloudDatabase" />), and private data, which contains user-specific data (see <see cref="P:CloudKit.CKContainer.PrivateCloudDatabase" />). Within iCloud, public data is stored in the app's iCloud storage while private data is stored in the user's private iCloud storage. The public <see cref="T:CloudKit.CKDatabase" /> is available to all connected users of the app, whether or not they are logged in to iCloud. The private <see cref="T:CloudKit.CKDatabase" /> is only if the user is logged in to iCloud. Developers must write their code such that it reacts gracefully to changes in the user's login or connection status.</para>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/CloudKit/Reference/CKContainer_class/index.html">Apple documentation for <c>CKContainer</c></related>
</Docs>
</Documentation>

@ -0,0 +1,23 @@
<Documentation>
<Docs DocId="T:CoreAnimation.CAAnimation">
<summary>Base class for animations.</summary>
<remarks>
<para>Layer-based animations are disabled by <see cref="T:UIKit.UIView" />s except within <see cref="T:UIKit.UIView" /> animation blocks. Layer-based animations within such blocks ignore the blocks' duration and operate at their own specified duration, either the implicit default of 0.25 seconds or an explicit length. This is shown in the following example, in which the <see cref="T:UIKit.UIView" /> animation block's duration is 1.0, but in actuality, the layer-based implicit opacity animation ends in 0.25 seconds and the re-positioning runs for 10 seconds.</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
UIView.AnimateAsync(1.0, () => {
imgView.Layer.Opacity = 0.0f;
var theAnim = CABasicAnimation.FromKeyPath("position");
theAnim.From = NSObject.FromObject(firstPosition);
theAnim.To = NSObject.FromObject(secondPosition);
theAnim.Duration = 10.0;
imgView.Layer.AddAnimation(theAnim, "AnimateFrame");
});
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CAAnimation_class/index.html">Apple documentation for <c>CAAnimation</c></related>
</Docs>
</Documentation>

@ -0,0 +1,76 @@
<Documentation>
<Docs DocId="T:CoreAnimation.CABasicAnimation">
<summary>Single keyframe based animations.</summary>
<remarks>
<para>
The animation is created by calling the <see cref="M:CoreAnimation.CABasicAnimation.FromKeyPath(System.String)" />
method and providing a keyPath that identifies the property on
the target that will be animated. The animation is performed by using the values on the From, To and By properties applied to the keyPath.
</para>
<para>The interpolation will depend on the values that you choose to set for From, To and By. The following table shows the effects of setting one or more of those properties on the property referenced by the key path:</para>
<list type="table">
<listheader>
<term>Settings</term>
<description>Result</description>
</listheader>
<item>
<term>No values are set</term>
<description>Interpolates between old value and new value in the presentation layer.</description>
</item>
<item>
<term>From is set</term>
<description>Interpolation is done from the From value to the current value in the presentation layer.</description>
</item>
<item>
<term>To is set</term>
<description>Interpolation is done from the current value in the presentation layer to the To value.</description>
</item>
<item>
<term>By is set</term>
<description>Interpolation is done between the current value on the presentation layer to the current value plus the value in By.</description>
</item>
<item>
<term>From and To are set</term>
<description>Interpolation is done between the From and To values.</description>
</item>
<item>
<term>From and By are set</term>
<description>Interpolation is done between the value set in From to From plus By.</description>
</item>
<item>
<term>To and By are set</term>
<description>Interpolation is done between To minus By and To.</description>
</item>
</list>
<para>
The From, To and By properties all take NSObject parameters.
If you need to specify other parameters, like a CGColor, you
can use the methods that take INativeObject parameters
(GetByAs, GetFromAs, GetToAs, SetBy, SetFrom, SetTo).
</para>
<para>
For example, the following will animate the "radius" property
for three seconds, from its current value, to the value 120
and will repeat this ten times.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var radiusAnimation = CABasicAnimation.FromKeyPath ("radius");
radiusAnimation.Duration = 3;
radiusAnimation.To = NSNumber.FromDouble (120);
radiusAnimation.RepeatCount = 10;]]></code>
</example>
<para>The above sets the “To” property to an NSObject, in this case the number 120. If you want to set the value to other kinds of objects, you can use the SetTo method; for example, the following sets the target color to a CGColor:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var radiusAnimation = CABasicAnimation.FromKeyPath ("shadowColor");
radiusAnimation.Duration = 3;
radiusAnimation.SetTo (UIColor.Red.CGColor);
radiusAnimation.RepeatCount = 10;]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CABasicAnimation_class/index.html">Apple documentation for <c>CABasicAnimation</c></related>
</Docs>
</Documentation>

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreAnimation.CADisplayLink">
<summary>Synchronization object between your animations and the display refresh.</summary>
<remarks>
<para>
The display link object is a timer that can be used to
synchronize your drawing with the screen refresh rate. Once
you create your CADisplayLink, you need to add it to a runloop
by using the <see cref="M:CoreAnimation.CADisplayLink.AddToRunLoop(Foundation.NSRunLoop,Foundation.NSString)" />
method.
</para>
<para>
Using the display link ensures that your application will not
suffer from display glitches like screen tearing and micro-stuttering.
</para>
<para>
You can pause the display link by setting the <see cref="P:CoreAnimation.CADisplayLink.Paused" />
property. And you can remove your display link from any
registered run loops by calling the <see cref="M:CoreAnimation.CADisplayLink.Invalidate" />
method.
</para>
<para>
By default the timer is triggered sixty times per second. If
your application does not need this level of precision, set
the FrameInterval property to skip one or more updates. For
example, setting FrameInterval to two, would invoke your
target method thirty times per second.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
GLKView myGLView;
void Setup ()
{
CADisplayLink displayLink = CADisplayLink.Create (Display);
displayLink.AddToRunLoop (NSRunLoop.Main, NSRunLoop.UITrackingRunLoopMode);
}
void Display ()
{
myGLView.Display ();
}
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CADisplayLink_ClassRef/index.html">Apple documentation for <c>CADisplayLink</c></related>
</Docs>
</Documentation>

@ -0,0 +1,216 @@
<Documentation>
<Docs DocId="T:CoreAnimation.CALayer">
<summary>Layers hold the images that are rendered into the screen.</summary>
<remarks>
<para>
CALayers hold the image content that is rendered into the
screen. They encapsulate position, styling, size and
transformation components. They also implement the
CAMediaTiming methods which allows them to participate in
animations.
</para>
<para>
There are several subclasses of CALayer that developers can use:
<see cref="T:CoreAnimation.CAEmitterLayer" />,
<see cref="T:CoreAnimation.CAGradientLayer" />,
<format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=T:CoreAnimation.CAEAGLLayer/CAOpenGLLayer&amp;scope=Xamarin" title="T:CoreAnimation.CAEAGLLayer/CAOpenGLLayer">T:CoreAnimation.CAEAGLLayer/CAOpenGLLayer</a></format>,
<see cref="T:CoreAnimation.CAReplicatorLayer" />,
<see cref="T:CoreAnimation.CAScrollLayer" />,
<see cref="T:CoreAnimation.CAShapeLayer" />,
<see cref="T:CoreAnimation.CATextLayer" />,
<see cref="T:CoreAnimation.CATiledLayer" />,
<see cref="T:CoreAnimation.CATransformLayer" /> and
<format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Core%20Animation%20QCComposition%20Layer&amp;scope=Xamarin" title="T:CoreAnimation.QCCompositionLayer">T:CoreAnimation.QCCompositionLayer</a></format>.
</para>
<format type="text/html">
<h2>Layer Content</h2>
</format>
<para>
There are three ways of providing content to a layer:
subclassing the layer class and overriding the draw methods,
using a layer delegate to implement the drawing or assigning a
static image to the layer.
</para>
<para>
To set the contents of the layer with a static image or from one of the rendering approaches, application developers must
assign a <see cref="T:CoreGraphics.CGImage" /> to the
<see cref="P:CoreAnimation.CALayer.Contents" />
property. For static content, they can just assign this property and the changes will be reflected directly.
</para>
<format type="text/html">
<h3>Contents by Subclassing CALayer</h3>
</format>
<para>
If you choose to subclass the CALayer class, you can either
override the <see cref="M:CoreAnimation.CALayer.Display" /> method,
which then requires you to set the <see cref="P:CoreAnimation.CALayer.Contents" /> property,
or you can override the <see cref="M:CoreAnimation.CALayer.DrawInContext(CoreGraphics.CGContext)" /> method, which provides you with a graphics context that you
can use to render into the display.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Overriding DrawInContext
public class DemoLayer : CALayer {
public override void DrawInContext (CGContext context)
{
base.DrawInContext (context);
// Fill in circle
context.SetFillColor (Color);
context.SetShadowWithColor (SizeF.Empty, 10.0f, glowColor);
context.EOFillPath();
}
}
// Overriding Display
public class DemoLayer2 : CALayer {
CGImage image = UIImage.FromBundle ("demo.png").CGImage;
public override void Display ()
{
Contents = image;
}
}
]]></code>
</example>
<format type="text/html">
<h3>Contents by Providing a CALayerDelegate</h3>
</format>
<para>
This approach can be used if the developer does not want to change the
class used for their CALayer rendering, and all they need to do is
assign the <see cref="P:CoreAnimation.CALayer.Delegate" /> property
to an instance of a subclass of <see cref="T:CoreAnimation.CALayerDelegate" /> where they
either override the <see cref="M:CoreAnimation.CALayerDelegate.DisplayLayer(CoreAnimation.CALayer)" />
method in which they must set the <see cref="P:CoreAnimation.CALayer.Contents" /> property,
or they override the <see cref="M:CoreAnimation.CALayerDelegate.DrawLayer(CoreAnimation.CALayer,CoreGraphics.CGContext)" />
method and provide their own rendering code there.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Overriding DisplayLayer
public class DemoLayerDelegate : CALayerDelegate {
CGImage image = UIImage.FromBundle ("demo.png").CGImage;
public override void DisplayLayer (CALayer layer)
{
layer.Contents = image;
}
}
// Overriding DrawLayer
public class DemoLayerDelegate2 : CALayerDelegate {
public override void DrawLayer (CALayer layer, CGContext context)
{
// Fill in circle
context.SetFillColor (Color);
context.SetShadowWithColor (SizeF.Empty, 10.0f, glowColor);
context.EOFillPath();
}
}
// To use the code:
void SetupViews (UIView view, UIView view2)
{
view.Layer.Delegate = new DemoLayerDelegate ();
view2.Layer.Delegate = new DemoLayerDelegate2 ();
}
]]></code>
</example>
<format type="text/html">
          <h2>Using Custom Layers with your UIViews or NSViews</h2>
</format>
<para>
On iOS, every UIView automatically has a CALayer associated
with it. When you want to use one of the CALayer subclasses
as your UIView's backing layer, you need to add the following
code snippet to your class:
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
class MyView : UIView {
//
// This instructs the runtime that whenever a MyView is created
// that it should instantiate a CATiledLayer and assign that to the
// UIView.Layer property
//
[Export ("layerClass")]
public static Class LayerClass () {
return new Class (typeof (CATiledLayer));
}
}
]]></code>
</example>
<para>
          If you want to subclass the CALayer class, you must provide a
          constructor that takes a CALayer and is annotated with an
          [Export ("initWithLayer:")] attribute. When you do this, you
          should also override the <see cref="M:CoreAnimation.CALayer.Clone(CoreAnimation.CALayer)" /> method, as these
          two members are used to create copies of your layer state on demand
          when CoreAnimation creates a mirror of your object hierarchy,
          which happens if anyone accesses the <see cref="P:CoreAnimation.CALayer.PresentationLayer" />
          property.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
public class MyLayer : CALayer {
UIColor FirstColor, SecondColor;
//
// Invoked by CoreAnimation if it needs to create a copy of your layer
// with a specific state in response to the user fetching the PresentationLayer
// property
//
[Export ("initWithLayer:")]
	public MyLayer (MyLayer other) : base (other)
{
// Do nothing, since we override Clone, but we could
// just clone the data here as well if we wanted to.
}
//
// This is the constructor you would use to create your new CALayer
public MyLayer (UIColor firstColor, UIColor secondColor)
{
FirstColor = firstColor;
SecondColor = secondColor;
}
// We must copy our own state here from the original layer
public override void Clone (CALayer _other)
{
MyLayer other = (MyLayer) _other;
FirstColor = other.FirstColor;
SecondColor = other.SecondColor;
}
}
]]></code>
</example>
<para>
On macOS, CALayers are optional. To enable them, you must set
the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=App%20Kit%20NSView%20Wants%20Layer&amp;scope=Xamarin" title="P:AppKit.NSView.WantsLayer">P:AppKit.NSView.WantsLayer</a></format> property
to true. You can change the layer for an NSView by setting
the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=App%20Kit%20NSView%20Layer&amp;scope=Xamarin" title="P:AppKit.NSView.Layer">P:AppKit.NSView.Layer</a></format> property.
</para>
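        <para>
          For example, a minimal sketch (assuming an existing <see cref="T:AppKit.NSView" /> instance
          named <c>view</c>) that enables layer backing and replaces the default layer could look like this:
        </para>
        <example>
          <code lang="csharp lang-csharp"><![CDATA[
// Enable layer backing for the view and replace its backing layer.
// "view" is a placeholder NSView instance used for this sketch.
view.WantsLayer = true;
view.Layer = new CATiledLayer ();
]]></code>
        </example>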
<para>
On macOS, to change the default layer class used for a given
NSView, you can override the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=App%20Kit%20Make%20Backing%20Layer&amp;scope=Xamarin" title="M:AppKit.MakeBackingLayer*">M:AppKit.MakeBackingLayer*</a></format> method.
</para>
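        <para>
          For example, a minimal sketch (using a hypothetical <c>TiledView</c> class) that overrides
          <c>MakeBackingLayer</c> to supply a custom backing layer class could look like this:
        </para>
        <example>
          <code lang="csharp lang-csharp"><![CDATA[
public class TiledView : NSView {
	// AppKit calls this method when the view needs its backing layer;
	// returning a CALayer subclass here changes the default layer class for this view.
	public override CALayer MakeBackingLayer ()
	{
		return new CATiledLayer ();
	}
}
]]></code>
        </example>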
</remarks>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Animation/CoreAnimation/Animate_a_UIView_using_UIKit">Animate a UIView using UIKit</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Animation/CoreAnimation/Animate_Using_Blocks">Animate Using Blocks</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Animation/CoreAnimation/Create_a_Keyframe_Animation">Create a Keyframe Animation</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Animation/CoreAnimation/Create_an_Animation_Block">Create an Animation Block</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Animation/CoreAnimation/Create_An_Explicit_Animation">Create An Explicit Animation</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Animation/CoreAnimation/Create_an_Implicit_Animation">Create an Implicit Animation</related>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CALayer_class/index.html">Apple documentation for <c>CALayer</c></related>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,10 @@
<Documentation>
<Docs DocId="T:CoreData.INSFetchedResultsSectionInfo">
<summary>Interface representing the required methods (if any) of the protocol <see cref="T:CoreData.NSFetchedResultsSectionInfo" />.</summary>
<remarks>
<para>This interface contains the required methods (if any) from the protocol defined by <see cref="T:CoreData.NSFetchedResultsSectionInfo" />.</para>
<para>If developers create classes that implement this interface, the implementation methods will automatically be exported to Objective-C with the matching signature from the method defined in the <see cref="T:CoreData.NSFetchedResultsSectionInfo" /> protocol.</para>
<para>Optional methods (if any) are provided by the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Core%20Data%20NSFetched%20Results%20Section%20Info_%20Extensions&amp;scope=Xamarin" title="T:CoreData.NSFetchedResultsSectionInfo_Extensions">T:CoreData.NSFetchedResultsSectionInfo_Extensions</a></format> class as extension methods to the interface, allowing developers to invoke any optional methods on the protocol.</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,40 @@
<Documentation>
<Docs DocId="T:CoreGraphics.CGBlendMode">
<summary>Blending mode used during composition.</summary>
<remarks>
<para>The blend modes are used when composing images, the operations combine the color data with an alpha channel. The operations are called the Porter-Duff blending operations.</para>
<para>
For a detailed explanation see the PDF reference manual.
</para>
<para>In the explanation below, the following variables are used:</para>
<para>
<list type="table">
<listheader>
<term>Term</term>
<description>Description</description>
</listheader>
<item>
<term>R</term>
<description>Premultiplied result color</description>
</item>
<item>
<term>S</term>
<description>Source Color</description>
</item>
<item>
<term>D</term>
<description>Destination Color</description>
</item>
<item>
<term>Sa</term>
<description>Source alpha value</description>
</item>
<item>
<term>Da</term>
<description>Destination alpha value</description>
</item>
</list>
</para>
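        <para>
          For example (using the standard Porter-Duff definitions, with all colors premultiplied by
          their alpha), the <see cref="F:CoreGraphics.CGBlendMode.Normal" /> (source-over) mode
          computes R = S + D * (1 - Sa), while <see cref="F:CoreGraphics.CGBlendMode.SourceIn" />
          computes R = S * Da.
        </para>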
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,59 @@
<Documentation>
<Docs DocId="T:CoreGraphics.CGPathDrawingMode">
<summary>Drawing mode.</summary>
<remarks>
        <para>This enumeration allows the application developer to choose whether to show the fill, the stroke, or both of a path. Additionally, it allows the developer to choose between the Core Graphics standard "non-zero winding rule" fill mode and the "even-odd rule" fill mode.</para>
<para>Both the "non-zero winding rule" and the "even-odd rule" decide whether to fill a pixel by considering a line drawn from the point to outside the path.</para>
<para>The “non-zero winding rule” mode does not fill the pixel if the path crosses that line clockwise and counterclockwise an equal number of times. If the count of clockwise versus counterclockwise crossings is non-zero, the point is considered inside the path and is filled. As the following illustration shows, this makes path direction an important consideration.</para>
<para>
<img href="~/CoreGraphics/_images/CGPathDrawingMode.NonZeroWindingRule.png" alt="Graphic illustrating the non-zero winding rule" />
</para>
<para>The “even-odd” rule fills a pixel if the number of paths crossed is odd. It does not take the direction of the path into account.</para>
<para>
<img href="~/CoreGraphics/_images/CGPathDrawingMode.EvenOddRule.png" alt="Graphic illustrating the even-odd winding rule" />
</para>
<para>The following example shows a more complex situation. The top path is drawn with the "even-odd rule" (<see cref="F:CoreGraphics.CGPathDrawingMode.EOFillStroke" />) while the bottom is filled with the "non-zero winding rule" (<see cref="F:CoreGraphics.CGPathDrawingMode.FillStroke" />). In both cases, the path is both stroked in red and filled in green.</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
public override void Draw (RectangleF rect)
{
base.Draw (rect);
using (var ctxt = UIGraphics.GetCurrentContext ()) {
ctxt.ScaleCTM (1, -1);
ctxt.TranslateCTM (0, -Bounds.Height);
DrawPathWithWindingMode (ctxt, Bounds.Height / 2, CGPathDrawingMode.EOFillStroke);
DrawPathWithWindingMode (ctxt, 0, CGPathDrawingMode.FillStroke);
}
}
void DrawPathWithWindingMode (CGContext ctxt, float yOffset, CGPathDrawingMode mode)
{
var points = new PointF[] {
new PointF (50, 50),
new PointF (200, 50),
new PointF (200, 100),
new PointF (50, 100),
new PointF (50, 50),
new PointF (150, 50),
new PointF (150, 150),
new PointF (100, 150),
new PointF (100, 25)
};
    points = points.Select (pt => new PointF (pt.X, pt.Y + yOffset)).ToArray ();
ctxt.SetStrokeColor (UIColor.Red.CGColor);
ctxt.SetFillColor (UIColor.Green.CGColor);
ctxt.MoveTo (points [0].X, points [0].Y);
for (var i = 1; i < points.Length; i++) {
ctxt.AddLineToPoint (points [i].X, points [i].Y);
}
ctxt.DrawPath (mode);
}
]]></code>
</example>
<para>
<img href="~/CoreGraphics/_images/CGPathDrawingMode.NonZeroVsEvenOdd.png" alt="Graphic created by the previous code, illustrating the two different winding rules." />
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIAdditionCompositing">
<summary>The CIAdditionCompositing CoreImage filter.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from image files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the CIAdditionCompositing filter with the two images.
var addComp = new CIAdditionCompositing ()
{
Image = heron,
BackgroundImage = clouds,
};
// Get the resulting Composition
var output = addComp.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/AdditionComposition.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,58 @@
<Documentation>
<Docs DocId="T:CoreImage.CIAffineClamp">
<summary>A <see cref="T:CoreImage.CIAffineFilter" /> that extends the border pixels to the post-transform boundaries.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Scale the image to make it more interesting
flower = new CILanczosScaleTransform {
Image = flower,
Scale = .2f
}.OutputImage;
// Create a CIAffineClamp filter with the input image
var affine_clamp = new CIAffineClamp () {
Image = flower
};
// Get the clamped image from the filter
var output = new CICrop () {
Image = affine_clamp.OutputImage,
Rectangle = new CIVector (0, 0, 300, 200)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/affine_clamp.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,58 @@
<Documentation>
<Docs DocId="T:CoreImage.CIAffineTile">
<summary>A <see cref="T:CoreImage.CIAffineFilter" /> that tiles the transformed image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Scale the image to make it more interesting
flower = new CILanczosScaleTransform {
Image = flower,
Scale = .2f
}.OutputImage;
// Create a CIAffineTile filter with the input image
var affine_tile = new CIAffineTile () {
Image = flower
};
// Get the tiled image from the filter
var output = new CICrop () {
Image = affine_tile.OutputImage,
Rectangle = new CIVector (0, 0, 300, 200)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/affine_tile.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,53 @@
<Documentation>
<Docs DocId="T:CoreImage.CIAffineTransform">
<summary>Performs an affine transform on an image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a File
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create an AffineTransform to Skew the Image
var transform = new CGAffineTransform (1F, .5F, .5F, 1F, 0F, 0F);
var affineTransform = new CIAffineTransform ()
{
Image = flower,
Transform = transform
};
// Get the Transformed Image
var output = affineTransform.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/AffineTransform.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,59 @@
<Documentation>
<Docs DocId="T:CoreImage.CIBarsSwipeTransition">
<summary>A <see cref="T:CoreImage.CITransitionFilter" /> that animates a transition by moving a bar over the source image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CIBarsSwipeTransition filter with the input image
var bars_swipe_transition = new CIBarsSwipeTransition ()
{
Image = heron,
TargetImage = clouds,
Time = 0.5f
};
// Get the transition image from the filter
var output = bars_swipe_transition.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image inputs:
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/bars_swipe_transition.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,62 @@
<Documentation>
<Docs DocId="T:CoreImage.CIBlendWithAlphaMask">
<summary>A <see cref="T:CoreImage.CIBlendWithMask" /> that uses a mask image to blend foreground and background images.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImages from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage xamarinAlpha = CIImage.FromCGImage (UIImage.FromFile ("XamarinAlpha.png").CGImage);
// Create a CIBlendWithAlphaMask filter with our three input images
var blend_with_alpha_mask = new CIBlendWithAlphaMask () {
BackgroundImage = clouds,
Image = flower,
Mask = xamarinAlpha
};
// Get the blended image from the filter
var output = blend_with_alpha_mask.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following background, image and mask inputs:
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
<img href="~/CoreImage/_images/XamarinAlpha.png" alt="Image with alpha channel" />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/blend_with_alpha_mask.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,62 @@
<Documentation>
<Docs DocId="T:CoreImage.CIBlendWithMask">
<summary>A <see cref="T:CoreImage.CIBlendFilter" /> that uses a grayscale mask to blends its foreground and background images.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImages from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage xamarin = CIImage.FromCGImage (UIImage.FromFile ("Xamarin.png").CGImage);
// Create a CIBlendWithMask filter with our three input images
var blend_with_mask = new CIBlendWithMask () {
BackgroundImage = clouds,
Image = flower,
Mask = xamarin
};
// Get the blended image from the filter
var output = blend_with_mask.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following background, image and mask inputs:
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
<img href="~/CoreImage/_images/Xamarin.png" alt="Result of applying the filter." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/blend_with_mask.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CIBloom">
<summary>A <see cref="T:CoreImage.CIFilter" /> that creates an edge-flow effect.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIBloom filter with the input image
var bloom = new CIBloom () {
Image = flower
};
// Get the bloom image from the filter
var output = bloom.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/bloom.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIBumpDistortion">
<summary>A <see cref="T:CoreImage.CIDistortionFilter" /> that creates a bump at the specified center point.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage xamarinCheck = CIImage.FromCGImage (UIImage.FromFile ("XamarinCheck.png").CGImage);
var width = xamarinCheck.Extent.Width;
var height = xamarinCheck.Extent.Height;
// Create a CIBumpDistortion filter with the input image, center, radius and scale
var bump_distortion = new CIBumpDistortion () {
Image = xamarinCheck,
Center = new CIVector (width/2f, height/2f),
Radius = .4f * height,
Scale = .5f
};
// Get the distorted image from the filter
var output = bump_distortion.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following input:
</para>
<para>
<img href="~/CoreImage/_images/XamarinCheck.png" alt="Logo on a checkered background" />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/bump_distortion.png" alt="Result of applying the filter." />
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,53 @@
<Documentation>
<Docs DocId="T:CoreImage.CIBumpDistortionLinear">
<summary>A filter that distorts the image around a convex or concave line.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage xamarinCheck = CIImage.FromCGImage (UIImage.FromFile ("XamarinCheck.png").CGImage);
var width = xamarinCheck.Extent.Width;
var height = xamarinCheck.Extent.Height;
// Create a CIBumpDistortionLinear filter with the input image
var bump_distortion_linear = new CIBumpDistortionLinear () {
Image = xamarinCheck,
Center = new CIVector (width * .5f, height * .5f),
Radius = .4f * height,
Scale = .5f,
Angle = (float)Math.PI * .5f
};
// Get the distorted image from the filter
var output = bump_distortion_linear.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/XamarinCheck.png" alt="Logo on a checkered background" />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/bump_distortion_linear.png" alt="Result of applying the filter." />
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,45 @@
<Documentation>
<Docs DocId="T:CoreImage.CICheckerboardGenerator">
<summary>The CICheckerboardGenerator CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the Two Colors for the Checkerboard
var c0 = CIColor.FromRgb (1, 0, 0);
var c1 = CIColor.FromRgb (0, 1, 0);
var checker = new CICheckerboardGenerator ()
{
Color0 = c0,
Color1 = c1,
	Center = new CIVector (new float[] { 10, 10 }), // Default [80 80]
Sharpness = 1F // Default 1
};
// The Generator Filters need to be cropped before they can be displayed
var crop = new CICrop()
{
Image = checker.OutputImage,
// Create the Bounds based on the Size of the application Window. (UIWindow)
Rectangle = new CIVector(0, 0, window.Bounds.Width, window.Bounds.Height)
};
// Get the final Generated Image
var output = crop.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CICircleSplashDistortion">
<summary>Makes the pixels at the circumference of a circle spread out to the boundaries of the image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CICircleSplashDistortion filter with the input image
var circle_splash_distortion = new CICircleSplashDistortion () {
Image = heron,
};
// Get the distorted image from the filter
var output = new CICrop {
Image = circle_splash_distortion.OutputImage,
Rectangle = new CIVector (0, 0, heron.Extent.Width, heron.Extent.Height)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/circle_splash_distortion.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CICircularScreen">
<summary>A <see cref="T:CoreImage.CIScreenFilter" /> that creates a circular bulls-eye-style halftone screen.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CICircularScreen filter with the input image
var circular_screen = new CICircularScreen () {
Image = flower
};
// Get the altered image from the filter
var output = circular_screen.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/cilcular_screen.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,55 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorBlendMode">
<summary>The CIColorBlendMode CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create our CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the CIColorBlend Filter with our two Images
var colorBlend = new CIColorBlendMode ()
{
Image = heron,
BackgroundImage = clouds
};
var output = colorBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,57 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorBurnBlendMode">
<summary>The CIColorBurnBlendMode CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create our CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the ColorBurnBlend filter
var colorBurn = new CIColorBurnBlendMode()
{
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the Filter
var output = colorBurn.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorBurnBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorClamp">
    <summary>A filter that clamps each color component to the specified range.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIColorClamp filter with the input image
var color_clamp = new CIColorClamp ()
{
Image = flower,
InputMinComponents = new CIVector (.1f, 0f, .1f, 0),
InputMaxComponents = new CIVector (.6f, 1f, .6f, 1),
};
// Get the clamped image from the filter
var output = color_clamp.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_clamp.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,53 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorControls">
<summary>The CIColorControls CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Load our Image from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create the CIColorControls Filter
var colorCtrls = new CIColorControls ()
{
Image = flower,
	Brightness = .5F, // Min: -1 Max: 1
	Saturation = 1.2F, // Min: 0 Max: 2
Contrast = 3.1F // Min: 0 Max: 4
};
// Get the Resulting image from the filter
var output = colorCtrls.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorControls.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorCrossPolynomial">
<summary>A filter that modifies the source pixels by applying a set of polynomial cross-products.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIColorCrossPolynomial filter with the input image
var color_cross_polynomial = new CIColorCrossPolynomial () {
Image = flower,
RedCoefficients = new CIVector (new float []{1, 0, 0, 0, 0, 0, 0, 0, 0, 0}),
GreenCoefficients = new CIVector (new float []{0, 1, 0, 0, 0, 0, 0, 0, 0, 0}),
BlueCoefficients = new CIVector (new float []{1, 0, 1, 0, -20, 0, 0, 0, 0, 0}),
};
// Get the altered image from the filter
var output = color_cross_polynomial.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_cross_polynomial.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,68 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorCube">
<summary>The CIColorCube CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Setup the color cube data
float [] color_cube_data = {
0, 0, 0, 1,
.1f, 0, 1, 1,
0, 1, 0, 1,
1, 1, 0, 1,
0, 0, 1, 1,
1, 0, 1, 1,
0, 1, 1, 1,
1, 1, 1, 1
};
var byteArray = new byte[color_cube_data.Length * 4];
Buffer.BlockCopy(color_cube_data, 0, byteArray, 0, byteArray.Length);
var data = NSData.FromArray (byteArray);
// Create a CIColorCube filter with the input image
var color_cube = new CIColorCube ()
{
Image = flower,
CubeDimension = 2,
CubeData = data
};
// Get the altered image from the filter
var output = color_cube.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_cube.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,70 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorCubeWithColorSpace">
<summary>A filter that modifies the source pixels using a 3D color-table and then maps the result to a color space.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Setup the color cube data
float [] color_cube_data = {
0, 0, 0, 1,
.1f, 0, 1, 1,
0, 1, 0, 1,
1, 1, 0, 1,
0, 0, 1, 1,
1, 0, 1, 1,
0, 1, 1, 1,
1, 1, 1, 1
};
var byteArray = new byte[color_cube_data.Length * 4];
Buffer.BlockCopy(color_cube_data, 0, byteArray, 0, byteArray.Length);
var data = NSData.FromArray (byteArray);
// Create a CIColorCubeWithColorSpace filter with the input image
using (var cs = CGColorSpace.CreateDeviceRGB ()) {
var color_cube_with_color_space = new CIColorCubeWithColorSpace () {
Image = flower,
CubeDimension = 2,
CubeData = data,
ColorSpace = cs
};
// Get the altered image from the filter
var output = color_cube_with_color_space.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
}
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_cube_with_color_space.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,55 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorDodgeBlendMode">
<summary>The CIColorDodgeBlendMode CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Get our CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the ColorDodgeBlend filter
var colorDodgeBlend = new CIColorDodgeBlendMode ()
{
Image = heron,
BackgroundImage = clouds,
};
var output = colorDodgeBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorDodgeBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorInvert">
<summary>The CIColorInvert CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a File.
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create the Color Invert Filter
var invert = new CIColorInvert ()
{
Image = flower
};
// Get the Filtered Image
var output = invert.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorInvert.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,51 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorMap">
<summary>Changes colors based on an input gradient image's mapping.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIColorMap filter with the input image
var color_map = new CIColorMap ()
{
Image = flower,
GradientImage = flower
};
// Get the altered image from the filter
var output = color_map.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_map.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,63 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorMatrix">
<summary>The CIColorMatrix CoreImage filter.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Setup our Vectors used by the CIColorMatrix Filter
var rVector = new CIVector (.5F, 0F, 0F); // Multiply the Red values by .5 (s.r = dot(s, rVector))
var gVector = new CIVector (0F, 1.5F, 0F); // Multiply the Green values by 1.5 (s.g = dot(s, gVector))
var bVector = new CIVector (0F, 0F, .75F); // Multiply the Blue values by .75 (s.b = dot(s, bVector))
var aVector = new CIVector (0F, 0F, 0F, 1.25F); // Multiply the Alpha values by 1.25 (s.a = dot(s, aVector))
var biasVector = new CIVector (0, 1, 0, 0); // A Bias to be Added to each Color Vector (s = s + bias)
// Construct the CIColorMatrix Filter
var colorMatrix = new CIColorMatrix ()
{
Image = flower,
RVector = rVector,
GVector = gVector,
BVector = bVector,
AVector = aVector,
BiasVector = biasVector
};
var output = colorMatrix.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorMatrix.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,54 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorMonochrome">
<summary>The CIColorMonochrome CoreImage filter</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Make it Purple R + B = Purple
var inputColor = new CIColor (new CGColor (100F, 0F, 100F));
// Create our CIColorMonochrome filter
var monoChrome = new CIColorMonochrome ()
{
Image = flower,
Color = inputColor,
Intensity = 1F, // Default 1
};
var output = monoChrome.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ColorMonochrome.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Просмотреть файл

@ -0,0 +1,54 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorPolynomial">
<summary>A filter that modifies the source pixels by applying a set of cubic polynomials. </summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIColorPolynomial filter with the input image
var color_polynomial = new CIColorPolynomial ()
{
Image = flower,
RedCoefficients = new CIVector (0, 0, 0, .4f),
GreenCoefficients = new CIVector (0, 0, .5f, .8f),
BlueCoefficients = new CIVector (0, 0, .5f, 1),
AlphaCoefficients = new CIVector (0, 1, 1, 1),
};
// Get the altered image from the filter
var output = color_polynomial.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
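      <para>
        Conceptually, each coefficient vector (a, b, c, d) maps a channel value v (in the 0 to 1 range) to a + b*v + c*v^2 + d*v^3. With the RedCoefficients above, for example, a red value of 0.5 becomes 0.4 * 0.5^3 = 0.05.
      </para>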
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_polynomial.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIColorPosterize">
<summary>Reduces the number of levels for each color component.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIColorPosterize filter with the input image
var color_posterize = new CIColorPosterize () {
Image = flower,
Levels = 8
};
// Get the altered image from the filter
var output = color_posterize.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/color_posterize.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,42 @@
<Documentation>
<Docs DocId="T:CoreImage.CIConstantColorGenerator">
<summary>Generates a solid color.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the ConstantColorGenerator
var colorGen = new CIConstantColorGenerator ()
{
Color = new CIColor (UIColor.Blue)
};
// The Generator Filters need to be cropped before they can be displayed
var crop = new CICrop ()
{
Image = colorGen.OutputImage,
Rectangle = new CIVector (0, 0, window.Bounds.Width, window.Bounds.Height)
};
// Get the final Image from the Crop Filter
var output = crop.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
<img href="~/CoreImage/_images/CoreImage.CIConstantColorGenerator.png" alt="Result of applying the filter." />
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,54 @@
<Documentation>
<Docs DocId="T:CoreImage.CIConvolution3X3">
<summary>A filter that performs a custom 3x3 matrix convolution.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CIConvolution3X3 filter with the input image
var convolution_3X3 = new CIConvolution3X3 () {
Image = heron,
Weights = new CIVector (new float [] {
0, -1, 0,
-1, 5, -1,
0, -1, 0}),
Bias = 0,
};
// Get the altered image from the filter
var output = convolution_3X3.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/convolution_3X3.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
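      <para>
        The weights above form a standard sharpening kernel. As a minimal sketch (reusing the heron image from the example above), a 3x3 box blur can be expressed with the same filter type by giving every weight the value 1/9, so that the kernel sums to 1 and overall brightness is preserved:
      </para>
      <example>
        <code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch: the same filter type with a 3x3 box-blur kernel.
// Every weight is 1/9, so the weights sum to 1 and brightness is preserved.
var boxBlur = new CIConvolution3X3 () {
	Image = heron,
	Weights = new CIVector (new float [] {
		1f / 9, 1f / 9, 1f / 9,
		1f / 9, 1f / 9, 1f / 9,
		1f / 9, 1f / 9, 1f / 9}),
	Bias = 0,
};
var blurredOutput = boxBlur.OutputImage;
]]></code>
      </example>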
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIConvolution5X5">
<summary>A filter that performs a custom 5x5 matrix convolution.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CIConvolution5X5 filter with the input image
var convolution_5X5 = new CIConvolution5X5 () {
Image = heron,
Weights = new CIVector (new float [] {
.5f, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, .5f}),
Bias = 0,
};
// Get the altered image from the filter
var output = convolution_5X5.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/convolution_5X5.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIConvolution9Horizontal">
<summary>A filter that performs a horizontal convolution of 9 elements.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CIConvolution9Horizontal filter with the input image
var convolution_9_horizontal = new CIConvolution9Horizontal () {
Image = heron,
Weights = new CIVector (new float [] {1, -1, 1, 0, 1, 0, -1, 1, -1}),
};
// Get the altered image from the filter
var output = convolution_9_horizontal.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/convolution_9_horizontal.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIConvolution9Vertical">
<summary>A filter that performs a vertical convolution of 9 elements.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CIConvolution9Vertical filter with the input image
var convolution_9_vertical = new CIConvolution9Vertical () {
Image = heron,
Weights = new CIVector (new float [] {1, -1, 1, 0, 1, 0, -1, 1, -1}),
};
// Get the altered image from the filter
var output = convolution_9_vertical.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/convolution_9_vertical.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,59 @@
<Documentation>
<Docs DocId="T:CoreImage.CICopyMachineTransition">
<summary>A <see cref="T:CoreImage.CITransitionFilter" /> that mimics the effect of a photocopier.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
// Create a CICopyMachineTransition filter with the input image
var copy_machine_transition = new CICopyMachineTransition ()
{
Image = heron,
TargetImage = clouds,
Time = 0.5f
};
// Get the altered image from the filter
var output = copy_machine_transition.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image inputs:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/copy_machine_transition.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,51 @@
<Documentation>
<Docs DocId="T:CoreImage.CICrop">
    <summary>A filter that crops the source image to the specified rectangle.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CICrop filter with our Image and a Rectangular selection
var crop = new CICrop ()
{
Image = flower,
Rectangle = new CIVector (0, 0, 300, 300)
};
// Get the Cropped image from the filter
var output = crop.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/Crop.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIDarkenBlendMode">
    <summary>A blend filter that composites two images by keeping the darker of each pair of corresponding color components.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the CIDarkenBlendMode filter
var darkenBlend = new CIDarkenBlendMode()
{
Image = heron,
BackgroundImage = clouds
};
// Get the Composite image from the filter
var output = darkenBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
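      <para>
        Conceptually, the darken blend keeps the smaller (darker) of the two corresponding color components for each pixel: result = min(image, background), computed per component.
      </para>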
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/DarkenBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,36 @@
<Documentation>
<Docs DocId="T:CoreImage.CIDetector">
<summary>Image analysis class for face detection.</summary>
<remarks>
<para>
        CIDetector is a general-purpose API for performing image analysis on an
        image. When it was introduced in iOS 5 it supported only face detection;
        later releases added detectors for rectangles, QR codes, and text. You
        initiate face detection by calling the static method <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=M:CoreImage.CIDetector.CreateFaceDetector(CoreImage.CIContext,bool)&amp;scope=Xamarin" title="M:CoreImage.CIDetector.CreateFaceDetector(CoreImage.CIContext,bool)">M:CoreImage.CIDetector.CreateFaceDetector(CoreImage.CIContext,bool)</a></format>
        and then retrieve the results by calling one of the FeaturesInImage
        overloads.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var imageFile = "photoFace2.jpg";
var image = new UIImage(imageFile);
var context = new CIContext ();
var detector = CIDetector.CreateFaceDetector (context, true);
var ciImage = CIImage.FromCGImage (image.CGImage);
var features = detector.GetFeatures (ciImage);
Console.WriteLine ("Found " + features.Length + " faces (origin bottom-left)");
foreach (var feature in features){
var facefeature = (CIFaceFeature) feature;
Console.WriteLine ("Left eye {0} {1}\n", facefeature.HasLeftEyePosition, facefeature.LeftEyePosition);
Console.WriteLine ("Right eye {0} {1}\n", facefeature.HasRightEyePosition, facefeature.RightEyePosition);
Console.WriteLine ("Mouth {0} {1}\n", facefeature.HasMouthPosition, facefeature.MouthPosition);
}
]]></code>
</example>
<para>Instances of <see cref="T:CoreImage.CIDetector" /> are expensive to initialize, so application developers should prefer to re-use existing instances rather than frequently creating new ones.</para>
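      <para>For example, rather than constructing a new detector for every image, an application can create one detector and reuse it. A minimal sketch (the field and method names here are hypothetical):</para>
      <example>
        <code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch with hypothetical names: create the (expensive) detector
// once and reuse it for every image that needs to be analyzed.
static CIDetector sharedFaceDetector;

static CIFeature [] DetectFaces (CIImage image)
{
	if (sharedFaceDetector == null)
		sharedFaceDetector = CIDetector.CreateFaceDetector (new CIContext (), true);
	return sharedFaceDetector.GetFeatures (image);
}
]]></code>
      </example>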
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/CoreImage/Reference/CIDetector_Ref/index.html">Apple documentation for <c>CIDetector</c></related>
</Docs>
</Documentation>

@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIDifferenceBlendMode">
    <summary>A blend filter that subtracts the darker of the two corresponding color components from the lighter one.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the CIDifferenceBlend filter
var differenceBlend = new CIDifferenceBlendMode ()
{
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the filter
var output = differenceBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
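      <para>
        Conceptually, the difference blend subtracts the darker of the two corresponding color components from the lighter one: result = abs(image - background), computed per component.
      </para>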
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/DifferenceBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,63 @@
<Documentation>
<Docs DocId="T:CoreImage.CIDisintegrateWithMaskTransition">
<summary>A <see cref="T:CoreImage.CITransitionFilter" /> that uses a mask to define the transition.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
CIImage xamarinCheck = CIImage.FromCGImage (UIImage.FromFile ("XamarinCheck.png").CGImage);
// Create a CIDisintegrateWithMaskTransition filter with the input image
var disintegrate_with_mask_transition = new CIDisintegrateWithMaskTransition ()
{
Image = clouds,
TargetImage = flower,
Mask = xamarinCheck
};
// Get the altered image from the filter
var output = disintegrate_with_mask_transition.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image inputs:
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
<img href="~/CoreImage/_images/XamarinCheck.png" alt="Logo on a checkered background" />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/disintegrate_with_mask_transition.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,59 @@
<Documentation>
<Docs DocId="T:CoreImage.CIDissolveTransition">
<summary>A <see cref="T:CoreImage.CITransitionFilter" /> that performs a cross-dissolve.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
// Create a CIDissolveTransition filter with the input image
var dissolve_transition = new CIDissolveTransition ()
{
Image = heron,
TargetImage = clouds,
Time = 0.5f
};
// Get the altered image from the filter
var output = dissolve_transition.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image inputs:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/dissolve_transition.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
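      <para>
        The Time property runs from 0 (only the source Image is visible) to 1 (only the TargetImage is visible), so the transition is normally animated by re-rendering the filter with increasing Time values. A minimal sketch, reusing the filter and context from the example above; ShowFrame is a hypothetical callback that displays each rendered frame:
      </para>
      <example>
        <code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch: step Time from 0 to 1 to animate the transition.
// ShowFrame is a hypothetical callback that displays each rendered frame.
for (int step = 0; step <= 10; step++) {
	dissolve_transition.Time = step / 10f;
	var frame = dissolve_transition.OutputImage;
	ShowFrame (UIImage.FromImage (context.CreateCGImage (frame, frame.Extent)));
}
]]></code>
      </example>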
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CIDotScreen">
<summary>A <see cref="T:CoreImage.CIScreenFilter" /> that screens with a halftone dot pattern.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIDotScreen filter with the input image
var dot_screen = new CIDotScreen () {
Image = flower
};
// Get the altered image from the filter
var output = dot_screen.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/dot_screen.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIEightfoldReflectedTile">
<summary>A <see cref="T:CoreImage.CITileFilter" /> that applies 8-way reflected symmetry.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIEightfoldReflectedTile filter with the input image
var eightfold_reflected_tile = new CIEightfoldReflectedTile () {
Image = flower
};
// Get the altered image from the filter
var output = new CICrop {
Image = eightfold_reflected_tile.OutputImage,
Rectangle = new CIVector (0, 0, 400, 300)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/eightfold_reflected_tile.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIExclusionBlendMode">
    <summary>A blend filter that produces an effect similar to <see cref="T:CoreImage.CIDifferenceBlendMode" />, but with lower contrast.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the Exclusion Blend filter
var exclusionBlend = new CIExclusionBlendMode ()
{
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the filter
var output = exclusionBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
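      <para>
        Conceptually, the exclusion blend resembles a difference blend with lower contrast: result = image + background - 2 * image * background, computed per color component.
      </para>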
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ExclusionBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,51 @@
<Documentation>
<Docs DocId="T:CoreImage.CIExposureAdjust">
    <summary>A filter that adjusts the exposure of the source image by the specified number of f-stops (EV).</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create the ExposureAdjust filter
var exposureAdjust = new CIExposureAdjust ()
{
Image = flower,
EV = 2F // Default value: 0.50 Minimum: 0.00 Maximum: 0.00 Slider minimum: -10.00 Slider maximum: 10.00 Identity: 0.00
};
// Get the resulting image from the filter
var output = exposureAdjust.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
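      <para>
        Conceptually, the filter multiplies each color component by 2 raised to the EV value: result = value * pow(2, EV). With EV = 2, as above, pixels become roughly four times brighter (subject to clamping); negative EV values darken the image.
      </para>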
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/ExposureAdjust.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,54 @@
<Documentation>
<Docs DocId="T:CoreImage.CIFalseColor">
    <summary>A filter that maps the luminance of the source image to a gradient between two colors.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create the two new colors to be used in the filter
var color0 = new CIColor (new CGColor (255F, 251F, 0F)); // A Yellowish Color
var color1 = new CIColor (new CGColor (51F, 0F, 255F)); // A Purplish Color
var falseColor = new CIFalseColor ()
{
Image = flower,
Color0 = color0,
Color1 = color1
};
// Get the color adjusted image from the filter
var output = falseColor.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
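      <para>
        Conceptually, the filter computes the luminance of each pixel and uses it to blend between Color0 (applied to dark areas) and Color1 (applied to light areas).
      </para>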
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/FalseColor.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,66 @@
<Documentation>
<Docs DocId="T:CoreImage.CIFilter">
<summary>CoreImage image filter.</summary>
<remarks>
<para>
        As of iOS 7.0, the following filters are built in:
</para>
<para>
Compositing Filters:
<list type="bullet"><item><term><see cref="T:CoreImage.CIAdditionCompositing" /></term></item><item><term><see cref="T:CoreImage.CIMaximumCompositing" /></term></item><item><term><see cref="T:CoreImage.CIMinimumCompositing" /></term></item><item><term><see cref="T:CoreImage.CIMultiplyCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceAtopCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceInCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceOutCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceOverCompositing" /></term></item></list></para>
<para>
Affine Filters (<see cref="T:CoreImage.CIAffineFilter" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIAffineClamp" /></term></item><item><term><see cref="T:CoreImage.CIAffineTile" /></term></item><item><term><see cref="T:CoreImage.CIAffineTransform" /></term></item></list></para>
<para>
Blend Filters (<see cref="T:CoreImage.CIBlendFilter" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIBlendWithMask" /></term></item><item><term><see cref="T:CoreImage.CIColorBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIColorBurnBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIColorDodgeBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIDarkenBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIDifferenceBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIExclusionBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIHardLightBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIHueBlendMode" /></term></item><item><term><see cref="T:CoreImage.CILightenBlendMode" /></term></item><item><term><see cref="T:CoreImage.CILuminosityBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIMultiplyBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIOverlayBlendMode" /></term></item><item><term><see cref="T:CoreImage.CISaturationBlendMode" /></term></item><item><term><see cref="T:CoreImage.CIScreenBlendMode" /></term></item><item><term><see cref="T:CoreImage.CISoftLightBlendMode" /></term></item></list></para>
<para>
Compositing Filters (<see cref="T:CoreImage.CICompositingFilter" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIAdditionCompositing" /></term></item><item><term><see cref="T:CoreImage.CIMaximumCompositing" /></term></item><item><term><see cref="T:CoreImage.CIMinimumCompositing" /></term></item><item><term><see cref="T:CoreImage.CIMultiplyCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceAtopCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceInCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceOutCompositing" /></term></item><item><term><see cref="T:CoreImage.CISourceOverCompositing" /></term></item></list></para>
<para>
Convolution Filters (<see cref="T:CoreImage.CIConvolutionCore" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIConvolution3X3" /></term></item><item><term><see cref="T:CoreImage.CIConvolution5X5" /></term></item><item><term><see cref="T:CoreImage.CIConvolution9Horizontal" /></term></item><item><term><see cref="T:CoreImage.CIConvolution9Vertical" /></term></item></list></para>
<para>
Distortion Filters (<see cref="T:CoreImage.CIDistortionFilter" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIBumpDistortion" /></term></item><item><term><see cref="T:CoreImage.CIBumpDistortionLinear" /></term></item><item><term><see cref="T:CoreImage.CICircleSplashDistortion" /></term></item><item><term><see cref="T:CoreImage.CIHoleDistortion" /></term></item><item><term><see cref="T:CoreImage.CIPinchDistortion" /></term></item><item><term><see cref="T:CoreImage.CITwirlDistortion" /></term></item><item><term><see cref="T:CoreImage.CIVortexDistortion" /></term></item></list></para>
<para>
Photo Effects (<see cref="T:CoreImage.CIPhotoEffect" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIPhotoEffectChrome" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectFade" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectInstant" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectMono" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectNoir" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectProcess" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectTonal" /></term></item><item><term><see cref="T:CoreImage.CIPhotoEffectTransfer" /></term></item></list></para>
<para>
Transition Filters (<see cref="T:CoreImage.CITransitionFilter" />):
<list type="bullet"><item><term><see cref="T:CoreImage.CIBarsSwipeTransition" /></term></item><item><term><see cref="T:CoreImage.CICopyMachineTransition" /></term></item><item><term><see cref="T:CoreImage.CIDisintegrateWithMaskTransition" /></term></item><item><term><see cref="T:CoreImage.CIDissolveTransition" /></term></item><item><term><see cref="T:CoreImage.CIFlashTransition" /></term></item><item><term><see cref="T:CoreImage.CIModTransition" /></term></item><item><term><see cref="T:CoreImage.CISwipeTransition" /></term></item></list></para>
<para>Specialized Filters:
<list type="bullet"><item><term><see cref="T:CoreImage.CIBloom" /></term></item><item><term><see cref="T:CoreImage.CICheckerboardGenerator" /></term></item><item><term><see cref="T:CoreImage.CIColorClamp" /></term></item><item><term><see cref="T:CoreImage.CIColorControls" /></term></item><item><term><see cref="T:CoreImage.CIColorCrossPolynomial" /></term></item><item><term><see cref="T:CoreImage.CIColorCube" /></term></item><item><term><see cref="T:CoreImage.CIColorInvert" /></term></item><item><term><see cref="T:CoreImage.CIColorMap" /></term></item><item><term><see cref="T:CoreImage.CIColorMatrix" /></term></item><item><term><see cref="T:CoreImage.CIColorMonochrome" /></term></item><item><term><see cref="T:CoreImage.CIColorPosterize" /></term></item><item><term><see cref="T:CoreImage.CIConstantColorGenerator" /></term></item><item><term><see cref="T:CoreImage.CICrop" /></term></item><item><term><see cref="T:CoreImage.CIExposureAdjust" /></term></item><item><term><see cref="T:CoreImage.CIFaceBalance" /></term></item><item><term><see cref="T:CoreImage.CIFalseColor" /></term></item><item><term><see cref="T:CoreImage.CIGammaAdjust" /></term></item><item><term><see cref="T:CoreImage.CIGaussianBlur" /></term></item><item><term><see cref="T:CoreImage.CIGaussianGradient" /></term></item><item><term><see cref="T:CoreImage.CIGloom" /></term></item><item><term><see cref="T:CoreImage.CIHighlightShadowAdjust" /></term></item><item><term><see cref="T:CoreImage.CIHueAdjust" /></term></item><item><term><see cref="T:CoreImage.CILanczosScaleTransform" /></term></item><item><term><see cref="T:CoreImage.CILightTunnel" /></term></item><item><term><see cref="T:CoreImage.CILinearGradient" /></term></item><item><term><see cref="T:CoreImage.CILinearToSRGBToneCurve" /></term></item><item><term><see cref="T:CoreImage.CIMaskToAlpha" /></term></item><item><term><see cref="T:CoreImage.CIMaximumComponent" /></term></item><item><term><see cref="T:CoreImage.CIMinimumComponent" /></term></item><item><term><see cref="T:CoreImage.CIPerspectiveTile" /></term></item><item><term><see cref="T:CoreImage.CIPerspectiveTransform" /></term></item><item><term><see cref="T:CoreImage.CIPixellate" /></term></item><item><term><see cref="T:CoreImage.CIQRCodeGenerator" /></term></item><item><term><see cref="T:CoreImage.CIRadialGradient" /></term></item><item><term><see cref="T:CoreImage.CIRandomGenerator" /></term></item><item><term><see cref="T:CoreImage.CIScreenFilter" /></term></item><item><term><see cref="T:CoreImage.CISepiaTone" /></term></item><item><term><see cref="T:CoreImage.CISharpenLuminance" /></term></item><item><term><see cref="T:CoreImage.CISRGBToneCurveToLinear" /></term></item><item><term><see cref="T:CoreImage.CIStarShineGenerator" /></term></item><item><term><see cref="T:CoreImage.CIStraightenFilter" /></term></item><item><term><see cref="T:CoreImage.CIStripesGenerator" /></term></item><item><term><see cref="T:CoreImage.CITemperatureAndTint" /></term></item><item><term><see cref="T:CoreImage.CITileFilter" /></term></item><item><term><see cref="T:CoreImage.CIToneCurve" /></term></item><item><term><see cref="T:CoreImage.CITriangleKaleidoscope" /></term></item><item><term><see cref="T:CoreImage.CIUnsharpMask" /></term></item><item><term><see cref="T:CoreImage.CIVibrance" /></term></item><item><term><see cref="T:CoreImage.CIVignette" /></term></item><item><term><see cref="T:CoreImage.CIVignetteEffect" /></term></item><item><term><see cref="T:CoreImage.CIWhitePointAdjust" /></term></item></list></para>
<para>
        To create a filter of a specific type, instantiate one of the
        above types, assign values to its properties, and read the
        result from its OutputImage property.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var sepiaFilter = new CISepiaTone () {
Image = mySourceImage,
Intensity = 0.8f
};
]]></code>
</example>
<para>
You can chain your filters as well:
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var sepiaFilter = new CISepiaTone () {
Image = mySourceImage,
Intensity = .8f
};
var invert = new CIColorInvert () {
	Image = sepiaFilter.OutputImage
};
]]></code>
</example>
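      <para>
        As with the single-filter examples, the end of the chain can then be rendered through a CIContext. A minimal sketch, assuming myImageView is an existing UIImageView:
      </para>
      <example>
        <code lang="csharp lang-csharp"><![CDATA[
// Render the output of the last filter in the chain.
var output = invert.OutputImage;
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
      </example>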
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIFilter_Class/index.html">Apple documentation for <c>CIFilter</c></related>
</Docs>
</Documentation>

@ -0,0 +1,59 @@
<Documentation>
<Docs DocId="T:CoreImage.CIFlashTransition">
<summary>A <see cref="T:CoreImage.CITransitionFilter" /> that presents a starburst-like flash.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
// Create a CIFlashTransition filter with the input image
var flash_transition = new CIFlashTransition ()
{
Image = heron,
TargetImage = clouds,
Time = 0.8f
};
// Get the altered image from the filter
var output = flash_transition.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/flash_transition.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIFourfoldReflectedTile">
<summary>A <see cref="T:CoreImage.CITileFilter" /> that applies 4-way reflected symmetry.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIFourfoldReflectedTile filter with the input image
var fourfold_reflected_tile = new CIFourfoldReflectedTile () {
Image = flower
};
// Get the altered image from the filter
var output = new CICrop {
Image = fourfold_reflected_tile.OutputImage,
Rectangle = new CIVector (0, 0, 400, 300)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/fourfold_reflected_tile.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIFourfoldRotatedTile">
<summary>A <see cref="T:CoreImage.CITileFilter" /> that rotates the source image in 90-degree increments.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIFourfoldRotatedTile filter with the input image
var fourfold_rotated_tile = new CIFourfoldRotatedTile () {
Image = flower
};
// Get the altered image from the filter
var output = new CICrop {
Image = fourfold_rotated_tile.OutputImage,
Rectangle = new CIVector (0, 0, 400, 300)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/fourfold_rotated_tile.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,54 @@
<Documentation>
<Docs DocId="T:CoreImage.CIFourfoldTranslatedTile">
<summary>A <see cref="T:CoreImage.CITileFilter" /> that applies four translations to the source image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIFourfoldTranslatedTile filter with the input image
var fourfold_translated_tile = new CIFourfoldTranslatedTile () {
Image = flower,
Center = new CIVector (100, 100),
Width = 150,
};
// Get the altered image from the filter
var output = new CICrop {
Image = fourfold_translated_tile.OutputImage,
Rectangle = new CIVector (0, 0, 400, 300)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/fourfold_translated_tile.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,53 @@
<Documentation>
<Docs DocId="T:CoreImage.CIGammaAdjust">
    <summary>A filter that adjusts midtone brightness by raising each color component of the source image to a power.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create the GammaAdjust filter
var gammaAdjust = new CIGammaAdjust ()
{
Image = flower,
Power = 3F, // Default value: 0.75
};
// Get the Gamma Adjusted image
var output = gammaAdjust.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
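      <para>
        Conceptually, the filter raises each color component (in the 0 to 1 range) to the given power: result = pow(value, Power). With Power = 3, as above, a midtone value of 0.5 becomes 0.125, so midtones darken; Power values below 1 lighten them.
      </para>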
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/GammaAdjust.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,51 @@
<Documentation>
<Docs DocId="T:CoreImage.CIGaussianBlur">
<summary>Applies a Gaussian blur.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
// Create a CIGaussianBlur filter with the input image
var gaussian_blur = new CIGaussianBlur ()
{
Image = clouds,
Radius = 3f,
};
// Get the altered image from the filter
var output = gaussian_blur.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/gaussian_blur.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIGaussianGradient">
<summary>Generates a gradient that fades via a 2D Gaussian distribution</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the Vector that represents the Center of the gradient
var centerVector = new CIVector (100, 100); // Default is [150 150]
// Create the two colors to form the Gradient.
var color1 = CIColor.FromRgba (1, 0, 1, 1);
var color0 = CIColor.FromRgba (0, 1, 1, 1);
// Construct the actual GaussianGradient filter
var gaussGradient = new CIGaussianGradient ()
{
Center = centerVector,
Color0 = color0,
Color1 = color1,
Radius = 280f // Default is 300
};
// The Generator Filters need to be cropped before they can be displayed
var crop = new CICrop ()
{
Image = gaussGradient.OutputImage,
// Create the Bounds based on the Size of the application Window. (UIWindow)
Rectangle = new CIVector (0, 0, window.Bounds.Width, window.Bounds.Height)
};
// Get the Final Cropped Image
var output = crop.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/GaussianGradient.png" alt="Result of applying the filter." />
</para>
</remarks>
</Docs>
</Documentation>

@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIGlideReflectedTile">
<summary>A <see cref="T:CoreImage.CITileFilter" /> that translates and smears the source image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIGlideReflectedTile filter with the input image
var glide_reflected_tile = new CIGlideReflectedTile () {
Image = flower
};
// Get the altered image from the filter
var output = new CICrop {
Image = glide_reflected_tile.OutputImage,
Rectangle = new CIVector (0, 0, 400, 300)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/glide_reflected_tile.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CIGloom">
<summary>A <see cref="T:CoreImage.CIFilter" /> that dulls the highlights of the source image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIGloom filter with the input image
var gloom = new CIGloom () {
Image = flower
};
// Get the altered image from the filter
var output = gloom.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/gloom.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIHardLightBlendMode">
<summary>A <see cref="T:CoreImage.CIFilter" /> that blends two images by multiplying or screening colors, depending on the source image color.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the HardLightBlend filter
var hardLightBlend = new CIHardLightBlendMode ()
{
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the filter.
var output = hardLightBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/HardLight.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CIHatchedScreen">
<summary>A <see cref="T:CoreImage.CIScreenFilter" /> that filters via a hatched halftone pattern.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIHatchedScreen filter with the input image
var hatched_screen = new CIHatchedScreen () {
Image = flower
};
// Get the altered image from the filter
var output = hatched_screen.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/hatched_screen.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CIHighlightShadowAdjust">
<summary>A <see cref="T:CoreImage.CIFilter" /> that adjusts the intensity of the highlights and shadows of the source image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file.
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Construct the HighlightShadowAdjust filter
var shadowAdjust = new CIHighlightShadowAdjust ()
{
Image = flower,
HighlightAmount = .75F, // Default is 1
ShadowAmount = 1.5F // Default is 0
};
// Get the adjusted image
var output = shadowAdjust.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/HiLightShadow.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIHoleDistortion">
<summary>A <see cref="T:CoreImage.CIDistortionFilter" /> that distorts pixels around a circular area.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CIHoleDistortion filter with the input image
var hole_distortion = new CIHoleDistortion () {
Image = heron,
Radius = 85
};
// Get the altered image from the filter
var output = hole_distortion.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/hole_distortion.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIHueAdjust">
<summary>A <see cref="T:CoreImage.CIFilter" /> that rotates the hue of the source image by a specified angle.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create a CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Construct the HueAdjust filter
var hueAdjust = new CIHueAdjust() {
Image = flower,
Angle = 1F // Default is 0
};
// Get the adjusted Image from the filter.
var output = hueAdjust.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/HueAdjust.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIHueBlendMode">
<summary>A <see cref="T:CoreImage.CIFilter" /> that blends two images using the hue of the source image and the luminance and saturation of the background image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create some CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Construct the HueBlend filter
var hueBlend = new CIHueBlendMode()
{
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the Filter
var output = hueBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/HueBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,76 @@
<Documentation>
<Docs DocId="T:CoreImage.CIImage">
<summary>Represents a set of instructions to create an image for use by CoreImage.</summary>
<remarks>
      <para>
	Unlike CoreGraphics images (<see cref="T:CoreGraphics.CGImage" />), which are objects
	that hold the actual image data to be processed, a CIImage
	represents a set of instructions to obtain an image. These
	recipes are used during the CoreImage filtering, analysis, or rendering
	pipeline to actually create the bitmap representation.
      </para>
      <para>
	A CIImage is created either by loading an image from disk,
	an NSData buffer, a CoreVideo buffer, or a CoreGraphics image,
	or as the result of processing an image pipeline with CoreImage.
      </para>
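      <para>
	For example, a few of these sources can be used as follows (a minimal sketch; the "flower.png" file name is only a placeholder):
      </para>
      <example>
        <code lang="csharp lang-csharp"><![CDATA[
// From a file on disk
CIImage fromUrl = CIImage.FromUrl (NSUrl.FromFilename ("flower.png"));
// From an in-memory NSData buffer
CIImage fromData = CIImage.FromData (NSData.FromFile ("flower.png"));
// From a CoreGraphics image
CIImage fromCGImage = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
]]></code>
      </example>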
      <para>
	CIImages are used, for example, when chaining various filters
	(<see cref="T:CoreImage.CIFilter" />) together: only this
	abstract representation is passed from the output of one
	filter to the input of the next. The image might not even
	reside in main memory; it could live entirely in GPU memory
	as an intermediate step between two filters.
      </para>
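      <para>
	A minimal sketch of such a chain follows (the specific filters are chosen only for illustration, and, as in the other examples on this page, a "flower.png" file and a UIImageView named myImageView are assumed). Two filters are connected simply by assigning the OutputImage of one as the Image of the next; no bitmap is produced until a CIContext renders the final result:
      </para>
      <example>
        <code lang="csharp lang-csharp"><![CDATA[
// Load the source image
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// First filter: remove the color
var mono = new CIColorControls () {
	Image = flower,
	Saturation = 0f
};
// Second filter: blur the desaturated result; only the CIImage "recipe" is passed along
var blur = new CIGaussianBlur () {
	Image = mono.OutputImage,
	Radius = 5f
};
// The bitmap is only produced here, when the context renders the chained result
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (blur.OutputImage, flower.Extent);
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
      </example>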
<format type="text/html">
<h2>Auto Enhancement Filters</h2>
</format>
      <para>
	The <see cref="M:CoreImage.CIImage.GetAutoAdjustmentFilters(CoreImage.CIAutoAdjustmentFilterOptions)" />
	method returns a list of filters that can be used to correct
	various problems commonly found in photos.
      </para>
      <para>
	These typically include:
	<list type="bullet"><item><term>
	    Adjusting shadow details (using the <see cref="T:CoreImage.CIHighlightShadowAdjust" /> filter).
	  </term></item><item><term>
	    Adjusting image contrast (using the <see cref="T:CoreImage.CIToneCurve" /> filter).
	  </term></item><item><term>
	    Adjusting image saturation (using the <see cref="T:CoreImage.CIVibrance" /> filter).
	  </term></item><item><term>
	    Adjusting face color balance to improve skin tones (using the <see cref="T:CoreImage.CIFaceBalance" /> filter).
	  </term></item><item><term>
	    Correcting red eye (using the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Core%20Image%20CIRed%20Eye%20Correction&amp;scope=Xamarin" title="T:CoreImage.CIRedEyeCorrection">T:CoreImage.CIRedEyeCorrection</a></format> filter).
	  </term></item></list></para>
      <para>
	You can control which kinds of filters are returned by
	setting the properties of a <see cref="T:CoreImage.CIAutoAdjustmentFilterOptions" />
	instance and then calling the appropriate method.
      </para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
void PrepareFixes (CIImage img)
{
var opt = new CIAutoAdjustmentFilterOptions () {
RedEye = true,
AutoAdjustCrop = true
};
foreach (var filter in img.GetAutoAdjustmentFilters (opt)) {
filter.Image = img;
img = filter.OutputImage;
}
}
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImage_Class/index.html">Apple documentation for <c>CIImage</c></related>
</Docs>
</Documentation>


@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CILanczosScaleTransform">
<summary>A scaling transform that uses Lanczos resampling.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create a CILanczosScaleTransform filter with the input image
var lanczos_scale_transform = new CILanczosScaleTransform () {
Image = heron,
Scale = .5f
};
// Get the altered image from the filter
var output = lanczos_scale_transform.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/lanczos_scale_transform.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,52 @@
<Documentation>
<Docs DocId="T:CoreImage.CILightTunnel">
<summary>A <see cref="T:CoreImage.CIFilter" /> that creates a spiraling effect.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CILightTunnel filter with the input image
var light_tunnel = new CILightTunnel () {
Image = flower
};
// Get the altered image from the filter
var output = new CICrop {
Image = light_tunnel.OutputImage,
Rectangle = new CIVector (0, 0, 400, 300)
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/light_tunel.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,55 @@
<Documentation>
<Docs DocId="T:CoreImage.CILightenBlendMode">
<summary>A <see cref="T:CoreImage.CIFilter" /> that composites two images by choosing the lighter of the source and background color values.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImage from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Construct the LightenBlend filter
var lightenBlend = new CILightenBlendMode() {
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the filter
var output = lightenBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/LightenBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CILineScreen">
<summary>A <see cref="T:CoreImage.CIScreenFilter" /> that simulates a halftone made of lines.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CILineScreen filter with the input image
var line_screen = new CILineScreen () {
Image = flower
};
// Get the altered image from the filter
var output = line_screen.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/line_screen.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,48 @@
<Documentation>
<Docs DocId="T:CoreImage.CILinearGradient">
<summary>A gradient that fades one color linearly into another.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var point0 = new CIVector(0, 0); // Default [0 0]
var point1 = new CIVector(250, 250); // Default [200 200]
var linearGrad = new CILinearGradient() {
Point0 = point0,
Point1 = point1,
Color0 = new CIColor (UIColor.Red),
Color1 = new CIColor (UIColor.Blue)
};
// The Generator Filters need to be cropped before they can be displayed
var crop = new CICrop () {
Image = linearGrad.OutputImage,
// Create the Bounds based on the Size of the application Window. (UIWindow)
Rectangle = new CIVector (0, 0, window.Bounds.Width, window.Bounds.Height)
};
// Get the final Generated image from the Crop filter
var output = crop.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/LinearGradient.png" alt="Result of applying the filter." />
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,49 @@
<Documentation>
<Docs DocId="T:CoreImage.CILinearToSRGBToneCurve">
<summary>A filter that maps color intensity from a linear gamma curve to the sRGB color space.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CILinearToSRGBToneCurve filter with the input image
var linear2Srgb_tone_curve = new CILinearToSRGBToneCurve () {
Image = flower
};
// Get the altered image from the filter
var output = linear2Srgb_tone_curve.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/linear2Srgb_tone_curve.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,55 @@
<Documentation>
<Docs DocId="T:CoreImage.CILuminosityBlendMode">
<summary>A <see cref="T:CoreImage.CIFilter" /> that blends two images using the hue and saturation of the background image and the luminance of the source image.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Create the LuminosityBlend filter
var luminosityBlend = new CILuminosityBlendMode() {
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the filter
var output = luminosityBlend.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/LuminosityBlend.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,69 @@
<Documentation>
<Docs DocId="T:CoreImage.CIMaskToAlpha">
<summary>A <see cref="T:CoreImage.CIFilter" /> that converts a grayscale image to an alpha mask.</summary>
<remarks>
<para>Black pixels in the source become completely transparent (an alpha of 0), while white pixels become completely opaque (an alpha of 1).</para>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIMaskToAlpha filter with the input image
var mask_to_alpha = new CIMaskToAlpha ()
{
Image = heron
};
// Get the altered image from the filter
var output = new CIBlendWithAlphaMask () {
BackgroundImage = clouds,
Image = flower,
Mask = mask_to_alpha.OutputImage
}.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image inputs:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/mask_to_alpha.png" alt="Result of applying the filter." />
</para>
<para>
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIMaximumComponent">
<summary>A <see cref="T:CoreImage.CIFilter" /> that creates a grayscale image from the maximum value of the RGB color values.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIMaximumComponent filter with the input image
var maximum_component = new CIMaximumComponent ()
{
Image = flower
};
// Get the altered image from the filter
var output = maximum_component.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/maximum_component.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,55 @@
<Documentation>
<Docs DocId="T:CoreImage.CIMaximumCompositing">
<summary>A <see cref="T:CoreImage.CIFilter" /> that composites two images by taking, for each color component, the maximum of the source and background values.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Construct the MaximumComposite filter
var maxComposite = new CIMaximumCompositing() {
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the Filter
var output = maxComposite.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/MaxComposite.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,50 @@
<Documentation>
<Docs DocId="T:CoreImage.CIMinimumComponent">
<summary>A <see cref="T:CoreImage.CIFilter" /> that creates a grayscale image from the minimum component of the RGB values.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create the CIImage from a file
CIImage flower = CIImage.FromCGImage (UIImage.FromFile ("flower.png").CGImage);
// Create a CIMinimumComponent filter with the input image
var minimum_component = new CIMinimumComponent ()
{
Image = flower
};
// Get the altered image from the filter
var output = minimum_component.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following image input:
</para>
<para>
<img href="~/CoreImage/_images/flower.png" alt="Photograph of a sunflower." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/minimum_component.png" alt="Result of applying the filter." />
</para>
<para>
"Flower" © 2012 Milica Sekulic, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>


@ -0,0 +1,56 @@
<Documentation>
<Docs DocId="T:CoreImage.CIMinimumCompositing">
<summary>A <see cref="T:CoreImage.CIFilter" /> that composites two images by taking, for each color component, the minimum of the source and background values.</summary>
<remarks>
<para>The following example shows this filter in use</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Create CIImages from files.
CIImage clouds = CIImage.FromCGImage (UIImage.FromFile ("clouds.jpg").CGImage);
CIImage heron = CIImage.FromCGImage (UIImage.FromFile ("heron.jpg").CGImage);
// Construct the MinimumComposite filter
var minComposite = new CIMinimumCompositing()
{
Image = heron,
BackgroundImage = clouds
};
// Get the composite image from the filter.
var output = minComposite.OutputImage;
// To render the results, we need to create a context, and then
// use one of the context rendering APIs, in this case, we render the
// result into a CoreGraphics image, which is merely a useful representation
//
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
With the following source:
</para>
<para>
<img href="~/CoreImage/_images/heron.jpg" alt="Photograph of a heron." />
</para>
<para>
<img href="~/CoreImage/_images/clouds.jpg" alt="Photograph of clouds and sunbeams." />
</para>
<para>
Produces the following output:
</para>
<para>
<img href="~/CoreImage/_images/MinimumComposite.png" alt="Result of applying the filter." />
</para>
<para>
"Sunrise near Atkeison Plateau" © 2012 Charles Atkeison, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
"canon" © 2012 cuatrok77 hernandez, used under a Creative Commons Attribution-ShareAlike license: https://creativecommons.org/licenses/by-sa/3.0/
</para>
</remarks>
</Docs>
</Documentation>

Some files were not shown because too many files changed in this diff.