backport commit 659ffaddb4
Brian Wignall 2019-12-26 06:45:03 -05:00 committed by Alexander Alekhin
Parent 5e2bcc9149
Commit f9c514b391
70 changed files with 89 additions and 89 deletions

View file

@ -1078,8 +1078,8 @@ void cvCreateTrainingSamples( const char* filename,
icvPlaceDistortedSample( sample, inverse, maxintensitydev,
maxxangle, maxyangle, maxzangle,
0 /* nonzero means placing image without cut offs */,
- 0.0 /* nozero adds random shifting */,
- 0.0 /* nozero adds random scaling */,
+ 0.0 /* nonzero adds random shifting */,
+ 0.0 /* nonzero adds random scaling */,
&data );
if( showsamples )

View file

@ -45,7 +45,7 @@ protected:
};
std::vector<Feature> features;
- cv::Mat normSum; //for nomalization calculation (L1 or L2)
+ cv::Mat normSum; //for normalization calculation (L1 or L2)
std::vector<cv::Mat> hist;
};
@ -70,7 +70,7 @@ inline float CvHOGEvaluator::Feature::calc( const std::vector<cv::Mat>& _hists,
const float *pnormSum = _normSum.ptr<float>((int)y);
normFactor = (float)(pnormSum[fastRect[0].p0] - pnormSum[fastRect[1].p1] - pnormSum[fastRect[2].p2] + pnormSum[fastRect[3].p3]);
- res = (res > 0.001f) ? ( res / (normFactor + 0.001f) ) : 0.f; //for cutting negative values, which apper due to floating precision
+ res = (res > 0.001f) ? ( res / (normFactor + 0.001f) ) : 0.f; //for cutting negative values, which appear due to floating precision
return res;
}

View file

@ -145,7 +145,7 @@ no child, parent is contour-3. So array is [-1,-1,-1,3].
And this is the final guy, Mr.Perfect. It retrieves all the contours and creates a full family
hierarchy list. **It even tells, who is the grandpa, father, son, grandson and even beyond... :)**.
- For examle, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
+ For example, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
result given by OpenCV and analyze it. Again, red letters give the contour number and green letters
give the hierarchy order.
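For orientation, the call that produces this hierarchy looks roughly like the sketch below in OpenCV-Python (`thresh` is an assumed binarized input; OpenCV 3.x returns the modified image as an extra first value):
@code{.py}
import cv2 as cv
contours, hierarchy = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
# hierarchy[0][i] = [Next, Previous, First_Child, Parent] for contour i
@endcode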

View file

@ -17,7 +17,7 @@ In short, we found locations of some parts of an object in another cluttered ima
is sufficient to find the object exactly on the trainImage.
For that, we can use a function from calib3d module, ie **cv.findHomography()**. If we pass the set
- of points from both the images, it will find the perpective transformation of that object. Then we
+ of points from both the images, it will find the perspective transformation of that object. Then we
can use **cv.perspectiveTransform()** to find the object. It needs atleast four correct points to
find the transformation.
@ -68,7 +68,7 @@ Now we set a condition that atleast 10 matches (defined by MIN_MATCH_COUNT) are
find the object. Otherwise simply show a message saying not enough matches are present.
If enough matches are found, we extract the locations of matched keypoints in both the images. They
- are passed to find the perpective transformation. Once we get this 3x3 transformation matrix, we use
+ are passed to find the perspective transformation. Once we get this 3x3 transformation matrix, we use
it to transform the corners of queryImage to corresponding points in trainImage. Then we draw it.
@code{.py}
if len(good)>MIN_MATCH_COUNT:
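    # A sketch of one plausible continuation (not part of the patch above): estimate the
    # homography from the good matches and project the queryImage corners, assuming
    # kp1, kp2, img1 and numpy (np) were set up earlier in the tutorial.
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)  # 3x3 perspective transform
    h, w = img1.shape  # grayscale queryImage assumed
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv.perspectiveTransform(corners, M)  # object outline in trainImage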

View file

@ -28,7 +28,7 @@ If it is a greater than a threshold value, it is considered as a corner. If we p
![image](images/shitomasi_space.png)
From the figure, you can see that only when \f$\lambda_1\f$ and \f$\lambda_2\f$ are above a minimum value,
- \f$\lambda_{min}\f$, it is conidered as a corner(green region).
+ \f$\lambda_{min}\f$, it is considered as a corner(green region).
Code
----
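The detector described above is exposed as cv.goodFeaturesToTrack; a minimal sketch, assuming the standard OpenCV-Python signature and a grayscale input `gray`:
@code{.py}
import cv2 as cv
corners = cv.goodFeaturesToTrack(gray, maxCorners=25, qualityLevel=0.01, minDistance=10)
# qualityLevel scales the strongest corner's quality measure to get the acceptance threshold
@endcode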

View file

@ -144,7 +144,7 @@ cv.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
### 7.b. Rotated Rectangle
Here, bounding rectangle is drawn with minimum area, so it considers the rotation also. The function
- used is **cv.minAreaRect()**. It returns a Box2D structure which contains following detals - (
+ used is **cv.minAreaRect()**. It returns a Box2D structure which contains following details - (
center (x,y), (width, height), angle of rotation ). But to draw this rectangle, we need 4 corners of
the rectangle. It is obtained by the function **cv.boxPoints()**
@code{.py}
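# A sketch of the flow just described (illustrative, not this file's verbatim listing);
# cnt is an assumed contour, img the image being annotated, np is numpy.
rect = cv.minAreaRect(cnt)            # ((center_x, center_y), (width, height), angle)
box = cv.boxPoints(rect)              # the 4 corners of the rotated rectangle
box = box.astype(np.int32)            # integer pixel coordinates for drawing
cv.drawContours(img, [box], 0, (0, 255, 0), 2)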

View file

@ -185,7 +185,7 @@ array([[[ 3, -1, 1, -1],
And this is the final guy, Mr.Perfect. It retrieves all the contours and creates a full family
hierarchy list. **It even tells, who is the grandpa, father, son, grandson and even beyond... :)**.
- For examle, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
+ For example, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
result given by OpenCV and analyze it. Again, red letters give the contour number and green letters
give the hierarchy order.

View file

@ -381,7 +381,7 @@ Here is explained in detail the code for the real time application:
as not, there are false correspondences or also called *outliers*. The [Random Sample
Consensus](http://en.wikipedia.org/wiki/RANSAC) or *Ransac* is a non-deterministic iterative
method which estimate parameters of a mathematical model from observed data producing an
- approximate result as the number of iterations increase. After appyling *Ransac* all the *outliers*
+ approximate result as the number of iterations increase. After applying *Ransac* all the *outliers*
will be eliminated to then estimate the camera pose with a certain probability to obtain a good
solution.
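As a concrete illustration (a sketch, not this tutorial's code): OpenCV wraps this scheme in cv.solvePnPRansac, where `obj_pts`/`img_pts` are assumed 3D-2D correspondences and `K`/`dist` the camera intrinsics:
@code{.py}
import cv2 as cv
ok, rvec, tvec, inliers = cv.solvePnPRansac(obj_pts, img_pts, K, dist)
# 'inliers' indexes the correspondences Ransac kept; the outliers are discarded
@endcode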

View file

@ -499,7 +499,7 @@ using the following OpenCV methods:
- the imwrite static method from the Highgui class to write an image to a file
- the GaussianBlur static method from the Imgproc class to apply to blur the original image
- We're also going to use the Mat class which is returned from the imread method and accpeted as the
+ We're also going to use the Mat class which is returned from the imread method and accepted as the
main argument to both the GaussianBlur and the imwrite methods.
### Add an image to the project

View file

@ -10,7 +10,7 @@ In this tutorial,
- We will see the basics of face detection and eye detection using the Haar Feature-based Cascade Classifiers
- We will use the @ref cv::CascadeClassifier class to detect objects in a video stream. Particularly, we
will use the functions:
- - @ref cv::CascadeClassifier::load to load a .xml classifier file. It can be either a Haar or a LBP classifer
+ - @ref cv::CascadeClassifier::load to load a .xml classifier file. It can be either a Haar or a LBP classifier
- @ref cv::CascadeClassifier::detectMultiScale to perform the detection.
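For reference, the same two calls sketched in OpenCV-Python (the cascade path is a placeholder and `gray` an assumed grayscale frame):
@code{.py}
import cv2 as cv
cascade = cv.CascadeClassifier()
cascade.load('haarcascade_frontalface_alt.xml')  # Haar or LBP cascade file
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
@endcode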
Theory

View file

@ -168,7 +168,7 @@ Command line arguments of opencv_traincascade application grouped by purposes:
- `-w <sampleWidth>` : Width of training samples (in pixels). Must have exactly the same value as used during training samples creation (opencv_createsamples utility).
- `-h <sampleHeight>` : Height of training samples (in pixels). Must have exactly the same value as used during training samples creation (opencv_createsamples utility).
- - Boosted classifer parameters:
+ - Boosted classifier parameters:
- `-bt <{DAB, RAB, LB, GAB(default)}>` : Type of boosted classifiers: DAB - Discrete AdaBoost, RAB - Real AdaBoost, LB - LogitBoost, GAB - Gentle AdaBoost.
- `-minHitRate <min_hit_rate>` : Minimal desired hit rate for each stage of the classifier. Overall hit rate may be estimated as (min_hit_rate ^ number_of_stages), @cite Viola04 §4.1.
- `-maxFalseAlarmRate <max_false_alarm_rate>` : Maximal desired false alarm rate for each stage of the classifier. Overall false alarm rate may be estimated as (max_false_alarm_rate ^ number_of_stages), @cite Viola04 §4.1.

View file

@ -43,7 +43,7 @@ VideoCapture can retrieve the following data:
- CAP_OPENNI_POINT_CLOUD_MAP - XYZ in meters (CV_32FC3)
- CAP_OPENNI_DISPARITY_MAP - disparity in pixels (CV_8UC1)
- CAP_OPENNI_DISPARITY_MAP_32F - disparity in pixels (CV_32FC1)
- - CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not ocluded, not shaded etc.)
+ - CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not occluded, not shaded etc.)
(CV_8UC1)
-# data given from BGR image generator:

View file

@ -1218,7 +1218,7 @@ struct CV_EXPORTS_W_SIMPLE CirclesGridFinderParameters2 : public CirclesGridFind
CV_WRAP CirclesGridFinderParameters2();
CV_PROP_RW float squareSize; //!< Distance between two adjacent points. Used by CALIB_CB_CLUSTERING.
- CV_PROP_RW float maxRectifiedDistance; //!< Max deviation from predicion. Used by CALIB_CB_CLUSTERING.
+ CV_PROP_RW float maxRectifiedDistance; //!< Max deviation from prediction. Used by CALIB_CB_CLUSTERING.
};
/** @brief Finds centers in the grid of circles.

View file

@ -48,7 +48,7 @@
#include <iterator>
/*
- This is stright-forward port v3 of Matlab calibration engine by Jean-Yves Bouguet
+ This is straight-forward port v3 of Matlab calibration engine by Jean-Yves Bouguet
that is (in a large extent) based on the paper:
Z. Zhang. "A flexible new technique for camera calibration".
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.

View file

@ -115,7 +115,7 @@ void CV_ChessboardDetectorBadArgTest::run( int /*start_from */)
img = cb.clone();
pattern_size = Size(2,2);
- errors += run_test_case( CV_StsOutOfRange, "Invlid pattern size" );
+ errors += run_test_case( CV_StsOutOfRange, "Invalid pattern size" );
pattern_size = cbg.cornersSize();
cb.convertTo(img, CV_32F);

View file

@ -1309,7 +1309,7 @@ CVAPI(void) cvMulTransposed( const CvArr* src, CvArr* dst, int order,
const CvArr* delta CV_DEFAULT(NULL),
double scale CV_DEFAULT(1.) );
- /** Tranposes matrix. Square matrices can be transposed in-place */
+ /** Transposes matrix. Square matrices can be transposed in-place */
CVAPI(void) cvTranspose( const CvArr* src, CvArr* dst );
#define cvT cvTranspose

View file

@ -569,7 +569,7 @@ inline v_int64x4 v256_blend(const v_int64x4& a, const v_int64x4& b)
{ return v_int64x4(v256_blend<m>(v_uint64x4(a.val), v_uint64x4(b.val)).val); }
// shuffle
- // todo: emluate 64bit
+ // todo: emulate 64bit
#define OPENCV_HAL_IMPL_AVX_SHUFFLE(_Tpvec, intrin) \
template<int m> \
inline _Tpvec v256_shuffle(const _Tpvec& a) \

View file

@ -73,7 +73,7 @@ implemented as a structure based on a one SIMD register.
- cv::v_uint8x16 and cv::v_int8x16: sixteen 8-bit integer values (unsigned/signed) - char
- cv::v_uint16x8 and cv::v_int16x8: eight 16-bit integer values (unsigned/signed) - short
- - cv::v_uint32x4 and cv::v_int32x4: four 32-bit integer values (unsgined/signed) - int
+ - cv::v_uint32x4 and cv::v_int32x4: four 32-bit integer values (unsigned/signed) - int
- cv::v_uint64x2 and cv::v_int64x2: two 64-bit integer values (unsigned/signed) - int64
- cv::v_float32x4: four 32-bit floating point values (signed) - float
- cv::v_float64x2: two 64-bit floating point values (signed) - double

View file

@ -1805,7 +1805,7 @@ inline v_float32x4 v_broadcast_element(const v_float32x4& a)
return v_setall_f32(v_extract_n<i>(a));
}
- ////// FP16 suport ///////
+ ////// FP16 support ///////
#if CV_FP16
inline v_float32x4 v_load_expand(const float16_t* ptr)
{

View file

@ -94,7 +94,7 @@ struct v_uint16x8
}
ushort get0() const
{
- return (ushort)wasm_i16x8_extract_lane(val, 0); // wasm_u16x8_extract_lane() unimplemeted yet
+ return (ushort)wasm_i16x8_extract_lane(val, 0); // wasm_u16x8_extract_lane() unimplemented yet
}
v128_t val;

View file

@ -50,7 +50,7 @@ typedef double v1f64 __attribute__ ((vector_size(8), aligned(8)));
#define msa_ld1q_f32(__a) ((v4f32)__builtin_msa_ld_w(__a, 0))
#define msa_ld1q_f64(__a) ((v2f64)__builtin_msa_ld_d(__a, 0))
- /* Store 64bits vector elments values to the given memory address. */
+ /* Store 64bits vector elements values to the given memory address. */
#define msa_st1_s8(__a, __b) (*((v8i8*)(__a)) = __b)
#define msa_st1_s16(__a, __b) (*((v4i16*)(__a)) = __b)
#define msa_st1_s32(__a, __b) (*((v2i32*)(__a)) = __b)
@ -377,7 +377,7 @@ typedef double v1f64 __attribute__ ((vector_size(8), aligned(8)));
})
/* Right shift elements in a 128 bits vector by an immediate value, saturate the results and them in a 64 bits vector.
- Input is signed and outpus is unsigned. */
+ Input is signed and output is unsigned. */
#define msa_qrshrun_n_s16(__a, __b) \
({ \
v8i16 __d = __builtin_msa_srlri_h(__builtin_msa_max_s_h(__builtin_msa_fill_h(0), (v8i16)(__a)), (int)(__b)); \

View file

@ -62,7 +62,7 @@ static String getDeviceTypeString(const cv::ocl::Device& device)
}
}
- return "unkown";
+ return "unknown";
}
} // namespace

View file

@ -165,7 +165,7 @@ public:
/** @brief Sets the initial step that will be used in downhill simplex algorithm.
- Step, together with initial point (givin in DownhillSolver::minimize) are two `n`-dimensional
+ Step, together with initial point (given in DownhillSolver::minimize) are two `n`-dimensional
vectors that are used to determine the shape of initial simplex. Roughly said, initial point
determines the position of a simplex (it will become simplex's centroid), while step determines the
spread (size in each dimension) of a simplex. To be more precise, if \f$s,x_0\in\mathbb{R}^n\f$ are

View file

@ -317,7 +317,7 @@ VSX_IMPL_1RG(vec_udword2, wi, vec_float4, wf, xvcvspuxds, vec_ctulo)
* Also there's already an open bug https://bugs.llvm.org/show_bug.cgi?id=31837
*
* So we're not able to use inline asm and only use built-in functions that CLANG supports
- * and use __builtin_convertvector if clang missng any of vector conversions built-in functions
+ * and use __builtin_convertvector if clang missing any of vector conversions built-in functions
*
* todo: clang asm template bug is fixed, need to reconsider the current workarounds.
*/
@ -491,7 +491,7 @@ VSX_IMPL_CONV_EVEN_2_4(vec_uint4, vec_double2, vec_ctu, vec_ctuo)
// Only for Eigen!
/*
* changing behavior of conversion intrinsics for gcc has effect on Eigen
- * so we redfine old behavior again only on gcc, clang
+ * so we redefine old behavior again only on gcc, clang
*/
#if !defined(__clang__) || __clang_major__ > 4
// ignoring second arg since Eigen only truncates toward zero

View file

@ -250,7 +250,7 @@ cvInitMatNDHeader( CvMatND* mat, int dims, const int* sizes,
for( int i = dims - 1; i >= 0; i-- )
{
if( sizes[i] < 0 )
- CV_Error( CV_StsBadSize, "one of dimesion sizes is non-positive" );
+ CV_Error( CV_StsBadSize, "one of dimension sizes is non-positive" );
mat->dim[i].size = sizes[i];
if( step > INT_MAX )
CV_Error( CV_StsOutOfRange, "The array is too big" );
@ -545,7 +545,7 @@ cvCreateSparseMat( int dims, const int* sizes, int type )
for( i = 0; i < dims; i++ )
{
if( sizes[i] <= 0 )
- CV_Error( CV_StsBadSize, "one of dimesion sizes is non-positive" );
+ CV_Error( CV_StsBadSize, "one of dimension sizes is non-positive" );
}
CvSparseMat* arr = (CvSparseMat*)cvAlloc(sizeof(*arr)+MAX(0,dims-CV_MAX_DIM)*sizeof(arr->size[0]));

View file

@ -53,7 +53,7 @@ cvtabs_32f( const _Ts* src, size_t sstep, _Td* dst, size_t dstep,
}
}
- // variant for convrsions 16f <-> ... w/o unrolling
+ // variant for conversions 16f <-> ... w/o unrolling
template<typename _Ts, typename _Td> inline void
cvtabs1_32f( const _Ts* src, size_t sstep, _Td* dst, size_t dstep,
Size size, float a, float b )
@ -123,7 +123,7 @@ cvt_32f( const _Ts* src, size_t sstep, _Td* dst, size_t dstep,
}
}
- // variant for convrsions 16f <-> ... w/o unrolling
+ // variant for conversions 16f <-> ... w/o unrolling
template<typename _Ts, typename _Td> inline void
cvt1_32f( const _Ts* src, size_t sstep, _Td* dst, size_t dstep,
Size size, float a, float b )

View file

@ -77,7 +77,7 @@ Replaced y(1,ndim,0.0) ------> y(1,ndim+1,0.0)
***********************************************************************************************************************************
- The code below was used in tesing the source code.
+ The code below was used in testing the source code.
Created by @SareeAlnaghy
#include <iostream>

View file

@ -1592,7 +1592,7 @@ public:
{
TlsAbstraction* tls = getTlsAbstraction();
if (NULL == tls)
- return; // TLS signleton is not available (terminated)
+ return; // TLS singleton is not available (terminated)
ThreadData *pTD = tlsValue == NULL ? (ThreadData*)tls->getData() : (ThreadData*)tlsValue;
if (pTD == NULL)
return; // no OpenCV TLS data for this thread
@ -1683,7 +1683,7 @@ public:
TlsAbstraction* tls = getTlsAbstraction();
if (NULL == tls)
- return NULL; // TLS signleton is not available (terminated)
+ return NULL; // TLS singleton is not available (terminated)
ThreadData* threadData = (ThreadData*)tls->getData();
if(threadData && threadData->slots.size() > slotIdx)
@ -1719,7 +1719,7 @@ public:
TlsAbstraction* tls = getTlsAbstraction();
if (NULL == tls)
- return; // TLS signleton is not available (terminated)
+ return; // TLS singleton is not available (terminated)
ThreadData* threadData = (ThreadData*)tls->getData();
if(!threadData)

View file

@ -134,7 +134,7 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
virtual void setOutShape(const MatShape &outTailShape = MatShape()) = 0;
/** @deprecated Use flag `produce_cell_output` in LayerParams.
- * @brief Specifies either interpret first dimension of input blob as timestamp dimenion either as sample.
+ * @brief Specifies either interpret first dimension of input blob as timestamp dimension either as sample.
*
* If flag is set to true then shape of input blob will be interpreted as [`T`, `N`, `[data dims]`] where `T` specifies number of timestamps, `N` is number of independent streams.
* In this case each forward() call will iterate through `T` timestamps and update layer's state `T` times.

View file

@ -622,7 +622,7 @@ void InfEngineNgraphNet::forward(const std::vector<Ptr<BackendWrapper> >& outBlo
try {
wrapper->outProms[processedOutputs].setException(std::current_exception());
} catch(...) {
- CV_LOG_ERROR(NULL, "DNN: Exception occured during async inference exception propagation");
+ CV_LOG_ERROR(NULL, "DNN: Exception occurred during async inference exception propagation");
}
}
}
@ -635,7 +635,7 @@ void InfEngineNgraphNet::forward(const std::vector<Ptr<BackendWrapper> >& outBlo
try {
wrapper->outProms[processedOutputs].setException(e);
} catch(...) {
- CV_LOG_ERROR(NULL, "DNN: Exception occured during async inference exception propagation");
+ CV_LOG_ERROR(NULL, "DNN: Exception occurred during async inference exception propagation");
}
}
}

View file

@ -116,7 +116,7 @@ message AttributeProto {
// The type field MUST be present for this version of the IR.
// For 0.0.1 versions of the IR, this field was not defined, and
- // implementations needed to use has_field hueristics to determine
+ // implementations needed to use has_field heuristics to determine
// which value field was in use. For IR_VERSION 0.0.2 or later, this
// field MUST be set and match the f|i|s|t|... field in use. This
// change was made to accommodate proto3 implementations.
@ -323,7 +323,7 @@ message TensorProto {
// For float and complex64 values
// Complex64 tensors are encoded as a single array of floats,
// with the real components appearing in odd numbered positions,
- // and the corresponding imaginary component apparing in the
+ // and the corresponding imaginary component appearing in the
// subsequent even numbered position. (e.g., [1.0 + 2.0i, 3.0 + 4.0i]
// is encoded as [1.0, 2.0 ,3.0 ,4.0]
// When this field is present, the data_type field MUST be FLOAT or COMPLEX64.
@ -373,7 +373,7 @@ message TensorProto {
// For double
// Complex64 tensors are encoded as a single array of doubles,
// with the real components appearing in odd numbered positions,
- // and the corresponding imaginary component apparing in the
+ // and the corresponding imaginary component appearing in the
// subsequent even numbered position. (e.g., [1.0 + 2.0i, 3.0 + 4.0i]
// is encoded as [1.0, 2.0 ,3.0 ,4.0]
// When this field is present, the data_type field MUST be DOUBLE or COMPLEX128

View file

@ -385,7 +385,7 @@ code which is distributed under GPL.
class CV_EXPORTS_W MSER : public Feature2D
{
public:
- /** @brief Full consturctor for %MSER detector
+ /** @brief Full constructor for %MSER detector
@param _delta it compares \f$(size_{i}-size_{i-delta})/size_{i-delta}\f$
@param _min_area prune the area which smaller than minArea

View file

@ -36,7 +36,7 @@ void image_derivatives_scharr(const cv::Mat& src, cv::Mat& dst, int xorder, int
// Nonlinear diffusion filtering scalar step
void nld_step_scalar(cv::Mat& Ld, const cv::Mat& c, cv::Mat& Lstep, float stepsize);
- // For non-maxima suppresion
+ // For non-maxima suppression
bool check_maximum_neighbourhood(const cv::Mat& img, int dsize, float value, int row, int col, bool same_img);
// Image downsampling

View file

@ -983,7 +983,7 @@ extractMSER_8uC3( const Mat& src,
double s = (double)(lr->size-lr->sizei)/(lr->dt-lr->di);
if ( s < lr->s )
{
- // skip the first one and check stablity
+ // skip the first one and check stability
if ( i > lr->reinit+1 && MSCRStableCheck( lr, params ) )
{
if ( lr->tmsr == NULL )

View file

@ -131,7 +131,7 @@ float optimizeSimplexDownhill(T* points, int n, F func, float* vals = NULL )
}
if (val_r<vals[0]) {
- // value is smaller than smalest in simplex
+ // value is smaller than smallest in simplex
// expand some more to see if it drops further
for (int i=0; i<n; ++i) {

View file

@ -1184,7 +1184,7 @@ CVAPI(CvScalar) cvColorToScalar( double packed_color, int arrtype );
/** @brief Returns the polygon points which make up the given ellipse.
The ellipse is define by the box of size 'axes' rotated 'angle' around the 'center'. A partial
- sweep of the ellipse arc can be done by spcifying arc_start and arc_end to be something other than
+ sweep of the ellipse arc can be done by specifying arc_start and arc_end to be something other than
0 and 360, respectively. The input array 'pts' must be large enough to hold the result. The total
number of points stored into 'pts' is returned by this function.
@see cv::ellipse2Poly
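A sketch of the Python binding of the C++ counterpart, with illustrative values for (center, axes, angle, arc_start, arc_end, delta):
@code{.py}
import cv2 as cv
pts = cv.ellipse2Poly((200, 200), (100, 50), 30, 0, 360, 10)  # Nx2 integer vertices
cv.polylines(img, [pts], True, (0, 255, 0))  # img: an assumed canvas image
@endcode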

View file

@ -630,7 +630,7 @@ approxPolyDP_( const Point_<T>* src_contour, int count0, Point_<T>* dst_contour,
WRITE_PT( src_contour[count-1] );
// last stage: do final clean-up of the approximated contour -
- // remove extra points on the [almost] stright lines.
+ // remove extra points on the [almost] straight lines.
is_closed = is_closed0;
count = new_count;
pos = is_closed ? count - 1 : 0;
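This internal routine backs the public cv::approxPolyDP; a usage sketch from Python, with `cnt` an assumed contour:
@code{.py}
import cv2 as cv
epsilon = 0.01 * cv.arcLength(cnt, True)      # tolerance: 1% of the contour perimeter
approx = cv.approxPolyDP(cnt, epsilon, True)  # simplified closed polygon
@endcode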

View file

@ -776,7 +776,7 @@ cv::RotatedRect cv::fitEllipseDirect( InputArray _points )
namespace cv
{
- // Calculates bounding rectagnle of a point set or retrieves already calculated
+ // Calculates bounding rectangle of a point set or retrieves already calculated
static Rect pointSetBoundingRect( const Mat& points )
{
int npoints = points.checkVector(2);
@ -1392,7 +1392,7 @@ cvFitEllipse2( const CvArr* array )
return cvBox2D(cv::fitEllipse(points));
}
- /* Calculates bounding rectagnle of a point set or retrieves already calculated */
+ /* Calculates bounding rectangle of a point set or retrieves already calculated */
CV_IMPL CvRect
cvBoundingRect( CvArr* array, int update )
{

View file

@ -325,7 +325,7 @@ void CV_ApproxPolyTest::run( int /*start_from*/ )
if( DstSeq == NULL )
{
ts->printf( cvtest::TS::LOG,
- "cvApproxPoly returned NULL for contour #%d, espilon = %g\n", i, Eps );
+ "cvApproxPoly returned NULL for contour #%d, epsilon = %g\n", i, Eps );
code = cvtest::TS::FAIL_INVALID_OUTPUT;
goto _exit_;
} // if( DstSeq == NULL )

View file

@ -60,7 +60,7 @@ namespace opencv_test { namespace {
// 6 - partial intersection, rectangle on top of different size
// 7 - full intersection, rectangle fully enclosed in the other
// 8 - partial intersection, rectangle corner just touching. point contact
- // 9 - partial intersetion. rectangle side by side, line contact
+ // 9 - partial intersection. rectangle side by side, line contact
static void compare(const std::vector<Point2f>& test, const std::vector<Point2f>& target)
{

View file

@ -40,7 +40,7 @@ foreach(file ${seed_project_files_rel})
endforeach()
list(APPEND depends gen_opencv_java_source "${OPENCV_DEPHELPER}/gen_opencv_java_source")
- ocv_copyfiles_add_target(${the_module}_android_source_copy JAVA_SRC_COPY "Copy Java(Andoid SDK) source files" ${depends})
+ ocv_copyfiles_add_target(${the_module}_android_source_copy JAVA_SRC_COPY "Copy Java(Android SDK) source files" ${depends})
file(REMOVE "${OPENCV_DEPHELPER}/${the_module}_android_source_copy") # force rebuild after CMake run
set(depends ${the_module}_android_source_copy "${OPENCV_DEPHELPER}/${the_module}_android_source_copy")

View file

@ -232,7 +232,7 @@ public abstract class CameraBridgeViewBase extends SurfaceView implements Surfac
/**
* This method is provided for clients, so they can disable camera connection and stop
- * the delivery of frames even though the surface view itself is not destroyed and still stays on the scren
+ * the delivery of frames even though the surface view itself is not destroyed and still stays on the screen
*/
public void disableView() {
synchronized(mSyncObject) {

View file

@ -32,4 +32,4 @@ To run performance tests, please launch a local web server in <build_dir>/bin fo
Navigate the web browser to the kernel page you want to test, like http://localhost:8080/perf/imgproc/cvtcolor.html.
- You can input the paramater, and then click the `Run` button to run the specific case, or it will run all the cases.
+ You can input the parameter, and then click the `Run` button to run the specific case, or it will run all the cases.

View file

@ -1679,7 +1679,7 @@ public:
/** @brief This function returns the trained parameters arranged across rows.
- For a two class classifcation problem, it returns a row matrix. It returns learnt parameters of
+ For a two class classification problem, it returns a row matrix. It returns learnt parameters of
the Logistic Regression as a matrix of type CV_32F.
*/
CV_WRAP virtual Mat get_learnt_thetas() const = 0;

View file

@ -1,5 +1,5 @@
#!/usr/bin/env python
- """Algorithm serializaion test."""
+ """Algorithm serialization test."""
import tempfile
import os
import cv2 as cv

View file

@ -1,5 +1,5 @@
#!/usr/bin/env python
- """"Core serializaion tests."""
+ """"Core serialization tests."""
import tempfile
import os
import cv2 as cv

View file

@ -332,14 +332,14 @@ finds two best matches for each feature and leaves the best one only if the
ratio between descriptor distances is greater than the threshold match_conf.
Unlike cv::detail::BestOf2NearestMatcher this matcher uses affine
- transformation (affine trasformation estimate will be placed in matches_info).
+ transformation (affine transformation estimate will be placed in matches_info).
@sa cv::detail::FeaturesMatcher cv::detail::BestOf2NearestMatcher
*/
class CV_EXPORTS AffineBestOf2NearestMatcher : public BestOf2NearestMatcher
{
public:
- /** @brief Constructs a "best of 2 nearest" matcher that expects affine trasformation
+ /** @brief Constructs a "best of 2 nearest" matcher that expects affine transformation
between images
@param full_affine whether to use full affine transformation with 6 degress of freedom or reduced

View file

@ -11367,7 +11367,7 @@ void UniversalTersePrint(const T& value, ::std::ostream* os) {
// NUL-terminated string.
template <typename T>
void UniversalPrint(const T& value, ::std::ostream* os) {
- // A workarond for the bug in VC++ 7.1 that prevents us from instantiating
+ // A workaround for the bug in VC++ 7.1 that prevents us from instantiating
// UniversalPrinter with T directly.
typedef T T1;
UniversalPrinter<T1>::Print(value, os);

View file

@ -94,11 +94,11 @@ class Aapt(Tool):
# get test instrumentation info
instrumentation_tag = [t for t in tags if t.startswith("instrumentation ")]
if not instrumentation_tag:
- raise Err("Can not find instrumentation detials in: %s", exe)
+ raise Err("Can not find instrumentation details in: %s", exe)
res.pkg_runner = re.search(r"^[ ]+A: android:name\(0x[0-9a-f]{8}\)=\"(?P<runner>.*?)\" \(Raw: \"(?P=runner)\"\)\r?$", instrumentation_tag[0], flags=re.MULTILINE).group("runner")
res.pkg_target = re.search(r"^[ ]+A: android:targetPackage\(0x[0-9a-f]{8}\)=\"(?P<pkg>.*?)\" \(Raw: \"(?P=pkg)\"\)\r?$", instrumentation_tag[0], flags=re.MULTILINE).group("pkg")
if not res.pkg_name or not res.pkg_runner or not res.pkg_target:
- raise Err("Can not find instrumentation detials in: %s", exe)
+ raise Err("Can not find instrumentation details in: %s", exe)
return res

View file

@ -452,7 +452,7 @@ int BadArgTest::run_test_case( int expected_code, const string& _descr )
{
thrown = true;
if (e.code != expected_code &&
- e.code != cv::Error::StsError && e.code != cv::Error::StsAssert // Exact error codes support will be dropped. Checks should provide proper text messages intead.
+ e.code != cv::Error::StsError && e.code != cv::Error::StsAssert // Exact error codes support will be dropped. Checks should provide proper text messages instead.
)
{
ts->printf(TS::LOG, "%s (test case #%d): the error code %d is different from the expected %d\n",

View file

@ -110,7 +110,7 @@ public:
//set parameters
// N - the number of samples stored in memory per model
nN = defaultNsamples;
- //kNN - k nearest neighbour - number on NN for detcting background - default K=[0.1*nN]
+ //kNN - k nearest neighbour - number on NN for detecting background - default K=[0.1*nN]
nkNN=MAX(1,cvRound(0.1*nN*3+0.40));
//Tb - Threshold Tb*kernelwidth
@ -292,7 +292,7 @@ protected:
//less important parameters - things you might change but be careful
////////////////////////
int nN;//totlal number of samples
- int nkNN;//number on NN for detcting background - default K=[0.1*nN]
+ int nkNN;//number on NN for detecting background - default K=[0.1*nN]
//shadow detection parameters
bool bShadowDetection;//default 1 - do shadow detection
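These parameters sit behind the public factory cv::createBackgroundSubtractorKNN; a usage sketch (the argument values shown are, to the best of my knowledge, the documented defaults):
@code{.py}
import cv2 as cv
fgbg = cv.createBackgroundSubtractorKNN(history=500, dist2Threshold=400.0, detectShadows=True)
fgmask = fgbg.apply(frame)  # frame: an assumed video frame; shadows are marked as gray (127)
@endcode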

View file

@ -181,7 +181,7 @@ public:
//! computes a background image which are the mean of all background gaussians
virtual void getBackgroundImage(OutputArray backgroundImage) const CV_OVERRIDE;
- //! re-initiaization method
+ //! re-initialization method
void initialize(Size _frameSize, int _frameType)
{
frameSize = _frameSize;

View file

@ -319,8 +319,8 @@ enum
CV_CAP_PROP_XI_COOLING = 466, // Start camera cooling.
CV_CAP_PROP_XI_TARGET_TEMP = 467, // Set sensor target temperature for cooling.
CV_CAP_PROP_XI_CHIP_TEMP = 468, // Camera sensor temperature
- CV_CAP_PROP_XI_HOUS_TEMP = 469, // Camera housing tepmerature
- CV_CAP_PROP_XI_HOUS_BACK_SIDE_TEMP = 590, // Camera housing back side tepmerature
+ CV_CAP_PROP_XI_HOUS_TEMP = 469, // Camera housing temperature
+ CV_CAP_PROP_XI_HOUS_BACK_SIDE_TEMP = 590, // Camera housing back side temperature
CV_CAP_PROP_XI_SENSOR_BOARD_TEMP = 596, // Camera sensor board temperature
CV_CAP_PROP_XI_CMS = 470, // Mode of color management system.
CV_CAP_PROP_XI_APPLY_CMS = 471, // Enable applying of CMS profiles to xiGetImage (see XI_PRM_INPUT_CMS_PROFILE, XI_PRM_OUTPUT_CMS_PROFILE).

View file

@ -299,7 +299,7 @@ bool CvCaptureCAM_Aravis::grabFrame()
size_t buffer_size;
framebuffer = (void*)arv_buffer_get_data (arv_buffer, &buffer_size);
- // retrieve image size properites
+ // retrieve image size properties
arv_buffer_get_image_region (arv_buffer, &xoffset, &yoffset, &width, &height);
// retrieve image ID set by camera

View file

@ -1293,7 +1293,7 @@ bool CvVideoWriter_AVFoundation::writeFrame(const IplImage* iplimage) {
colorSpace, kCGImageAlphaLast|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
- //CGImage -> CVPixelBufferRef coversion
+ //CGImage -> CVPixelBufferRef conversion
CVPixelBufferRef pixelBuffer = NULL;
CFDataRef cfData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
int status = CVPixelBufferCreateWithBytes(NULL,

View file

@ -805,7 +805,7 @@ bool CvCaptureFile::setupReadingAt(CMTime position) {
if (mMode == CV_CAP_MODE_BGR || mMode == CV_CAP_MODE_RGB) {
// For CV_CAP_MODE_BGR, read frames as BGRA (AV Foundation's YUV->RGB conversion is slightly faster than OpenCV's CV_YUV2BGR_YV12)
// kCVPixelFormatType_32ABGR is reportedly faster on OS X, but OpenCV doesn't have a CV_ABGR2BGR conversion.
- // kCVPixelFormatType_24RGB is significanly slower than kCVPixelFormatType_32BGRA.
+ // kCVPixelFormatType_24RGB is significantly slower than kCVPixelFormatType_32BGRA.
pixelFormat = kCVPixelFormatType_32BGRA;
mFormat = CV_8UC3;
} else if (mMode == CV_CAP_MODE_GRAY) {
@ -1323,7 +1323,7 @@ bool CvVideoWriter_AVFoundation::writeFrame(const IplImage* iplimage) {
colorSpace, kCGImageAlphaLast|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
- //CGImage -> CVPixelBufferRef coversion
+ //CGImage -> CVPixelBufferRef conversion
CVPixelBufferRef pixelBuffer = NULL;
CFDataRef cfData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
int status = CVPixelBufferCreateWithBytes(NULL,

View file

@ -1045,7 +1045,7 @@ bool GStreamerCapture::open(const String &filename_)
* \return property value
*
* There are two ways the properties can be retrieved. For seek-based properties we can query the pipeline.
- * For frame-based properties, we use the caps of the lasst receivef sample. This means that some properties
+ * For frame-based properties, we use the caps of the last receivef sample. This means that some properties
* are not available until a first frame was received
*/
double GStreamerCapture::getProperty(int propId) const

View file

@ -46,7 +46,7 @@ if (APPLE_FRAMEWORK AND BUILD_SHARED_LIBS)
set (CMAKE_INSTALL_NAME_DIR "@rpath")
endif()
- # Hidden visibilty is required for cxx on iOS
+ # Hidden visibility is required for cxx on iOS
set (no_warn "-Wno-unused-function -Wno-overloaded-virtual")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${no_warn}")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++ -fvisibility=hidden -fvisibility-inlines-hidden ${no_warn}")

View file

@ -4,7 +4,7 @@
# Toolchains with 'img' in the name are for MIPS R6 instruction sets.
# It is recommended to use cmake-gui application for build scripts configuration and generation:
# 1. Run cmake-gui
- # 2. Specifiy toolchain file for cross-compiling, mips32r5el-gnu.toolchian.cmake or mips64r6el-gnu.toolchain.cmake
+ # 2. Specify toolchain file for cross-compiling, mips32r5el-gnu.toolchian.cmake or mips64r6el-gnu.toolchain.cmake
# can be selected.
# 3. Configure and Generate makefiles.
# 4. make -j4 & make install

View file

@ -4,7 +4,7 @@
# Toolchains with 'img' in the name are for MIPS R6 instruction sets.
# It is recommended to use cmake-gui for build scripts configuration and generation:
# 1. Run cmake-gui
- # 2. Specifiy toolchain file mips32r5el-gnu.toolchian.cmake for cross-compiling.
+ # 2. Specify toolchain file mips32r5el-gnu.toolchian.cmake for cross-compiling.
# 3. Configure and Generate makefiles.
# 4. make -j4 & make install
# ----------------------------------------------------------------------------------------------

View file

@ -4,7 +4,7 @@
# Toolchains with 'img' in the name are for MIPS R6 instruction sets.
# It is recommended to use cmake-gui for build scripts configuration and generation:
# 1. Run cmake-gui
- # 2. Specifiy toolchain file mips64r6el-gnu.toolchain.cmake for cross-compiling.
+ # 2. Specify toolchain file mips64r6el-gnu.toolchain.cmake for cross-compiling.
# 3. Configure and Generate makefiles.
# 4. make -j4 & make install
# ----------------------------------------------------------------------------------------------

View file

@ -58,7 +58,7 @@ foreach(sample_filename ${cpp_samples})
target_compile_definitions(${tgt} PRIVATE HAVE_OPENGL)
endif()
if(sample_filename MATCHES "simd_")
- # disabled intentionally - demonstation purposes only
+ # disabled intentionally - demonstration purposes only
#target_include_directories(${tgt} PRIVATE "${CMAKE_CURRENT_LIST_DIR}")
#target_compile_definitions(${tgt} PRIVATE OPENCV_SIMD_CONFIG_HEADER=opencv_simd_config_custom.hpp)
#target_compile_definitions(${tgt} PRIVATE OPENCV_SIMD_CONFIG_INCLUDE_DIR=1)

View file

@ -12,7 +12,7 @@ static void help()
"It draws a random set of points in an image and then delaunay triangulates them.\n"
"Usage: \n"
"./delaunay \n"
- "\nThis program builds the traingulation interactively, you may stop this process by\n"
+ "\nThis program builds the triangulation interactively, you may stop this process by\n"
"hitting any key.\n";
}

View file

@ -157,7 +157,7 @@ int main()
cout << responses.t() << endl;
cout << "accuracy: " << calculateAccuracyPercent(labels_test, responses) << "%" << endl;
- // save the classfier
+ // save the classifier
const String saveFilename = "NewLR_Trained.xml";
cout << "saving the classifier to " << saveFilename << endl;
lr1->save(saveFilename);
@ -167,7 +167,7 @@ int main()
Ptr<LogisticRegression> lr2 = StatModel::load<LogisticRegression>(saveFilename);
// predict using loaded classifier
- cout << "predicting the dataset using the loaded classfier...";
+ cout << "predicting the dataset using the loaded classifier...";
Mat responses2;
lr2->predict(data_test, responses2);
cout << "done!" << endl;
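The same save/load round trip, sketched in Python (assuming the cv.ml bindings expose LogisticRegression_load, with `lr1` an already-trained model and `data_test` a CV_32F sample matrix):
@code{.py}
import cv2 as cv
lr1.save('NewLR_Trained.xml')                             # serialize the trained classifier
lr2 = cv.ml.LogisticRegression_load('NewLR_Trained.xml')  # restore it
_, responses2 = lr2.predict(data_test)                    # predict with the loaded model
@endcode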

View file

@ -10,7 +10,7 @@
* This program demonstrates how to use OpenCV PCA with a
* specified amount of variance to retain. The effect
* is illustrated further by using a trackbar to
- * change the value for retained varaince.
+ * change the value for retained variance.
*
* The program takes as input a text file with each line
* begin the full path to an image. PCA will be performed

View file

@ -36,7 +36,7 @@ public:
void load(const std::string &path);
private:
- /** The current number of correspondecnes */
+ /** The current number of correspondences */
int n_correspondences_;
/** The list of 2D points on the model surface */
std::vector<cv::KeyPoint> list_keypoints_;

View file

@ -17,7 +17,7 @@ static void help()
" CAP_OPENNI_POINT_CLOUD_MAP - XYZ in meters (CV_32FC3)\n"
" CAP_OPENNI_DISPARITY_MAP - disparity in pixels (CV_8UC1)\n"
" CAP_OPENNI_DISPARITY_MAP_32F - disparity in pixels (CV_32FC1)\n"
- " CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not ocluded, not shaded etc.) (CV_8UC1)\n"
+ " CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not occluded, not shaded etc.) (CV_8UC1)\n"
"2.) Data given from RGB image generator\n"
" CAP_OPENNI_BGR_IMAGE - color image (CV_8UC3)\n"
" CAP_OPENNI_GRAY_IMAGE - gray image (CV_8UC1)\n"

View file

@ -3,7 +3,7 @@
// This will loop through frames of video either from input media file
// or camera device and do processing of these data in OpenCL and then
// in OpenCV. In OpenCL it does inversion of pixels in left half of frame and
- // in OpenCV it does bluring in the right half of frame.
+ // in OpenCV it does blurring in the right half of frame.
*/
#include <cstdio>
#include <cstdlib>

View file

@ -15,7 +15,7 @@ Usage:
Use sliders to adjust PSF paramitiers.
Keys:
- SPACE - switch btw linear/cirular PSF
+ SPACE - switch btw linear/circular PSF
ESC - exit
Examples:

View file

@ -17,6 +17,6 @@ using namespace SDKSample;
Platform::Array<Scenario>^ MainPage::scenariosInner = ref new Platform::Array<Scenario>
{
// The format here is the following:
- // { "Description for the sample", "Fully quaified name for the class that implements the scenario" }
+ // { "Description for the sample", "Fully qualified name for the class that implements the scenario" }
{ "Enumerate cameras and add a video effect", "SDKSample.MediaCapture.AdvancedCapture" },
};