OpenCV  4.10.0
Open Source Computer Vision
Public Types | Public Member Functions | Static Public Member Functions | Public Attributes | List of all members
cv::cuda::HostMem Class Reference

A class with reference counting that wraps CUDA's special memory type allocation functions. See the detailed description below for more information.

#include <opencv2/core/cuda.hpp>

Collaboration diagram for cv::cuda::HostMem

Public Types

enum AllocType {
  PAGE_LOCKED = 1 ,
  SHARED = 2 ,
  WRITE_COMBINED = 4
}
 

Public Member Functions

 HostMem (const HostMem &m)
 
 HostMem (HostMem::AllocType alloc_type=HostMem::AllocType::PAGE_LOCKED)
 
 HostMem (InputArray arr, HostMem::AllocType alloc_type=HostMem::AllocType::PAGE_LOCKED)
 Creates from host memory, copying the data.
 
 HostMem (int rows, int cols, int type, HostMem::AllocType alloc_type=HostMem::AllocType::PAGE_LOCKED)
 
 HostMem (Size size, int type, HostMem::AllocType alloc_type=HostMem::AllocType::PAGE_LOCKED)
 
 ~HostMem ()
 
int channels () const
 
HostMem clone () const
 Returns a deep copy of the matrix, i.e. the data is copied.
 
void create (int rows, int cols, int type)
 Allocates new matrix data unless the matrix already has the specified size and type.
 
void create (Size size, int type)
 
GpuMat createGpuMatHeader () const
 Maps CPU memory to the GPU address space and creates a cuda::GpuMat header without reference counting for it.
 
Mat createMatHeader () const
 Returns a matrix header for the HostMem data, without reference counting.
 
int depth () const
 
size_t elemSize () const
 
size_t elemSize1 () const
 
bool empty () const
 
bool isContinuous () const
 
HostMem & operator= (const HostMem &m)
 
void release ()
 Decrements the reference counter and releases the memory if needed.
 
HostMem reshape (int cn, int rows=0) const
 
Size size () const
 
size_t step1 () const
 
void swap (HostMem &b)
 Swaps with another smart pointer.
 
int type () const
 

Static Public Member Functions

static MatAllocator * getAllocator (HostMem::AllocType alloc_type=HostMem::AllocType::PAGE_LOCKED)
 

Public Attributes

AllocType alloc_type
 
int cols
 
uchar * data
 
const uchar * dataend
 
uchar * datastart
 
int flags
 
int * refcount
 
int rows
 
size_t step
 

Detailed Description

A class with reference counting that wraps CUDA's special memory type allocation functions.

Its interface is similar to Mat, with an additional memory type parameter.

Note
The allocation size for such memory types is usually limited. For more details, see the CUDA 2.2 Pinned Memory APIs document or the CUDA C Programming Guide.
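
The following is a minimal sketch (not part of the reference itself) of the typical use of page-locked HostMem for asynchronous host-to-device transfers; the image size and fill values are illustrative.

#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // Pinned (page-locked) host buffer: copies from it can overlap with kernel
    // execution when issued on a non-default stream.
    cv::cuda::HostMem src(480, 640, CV_8UC3, cv::cuda::HostMem::PAGE_LOCKED);
    src.createMatHeader().setTo(cv::Scalar(0, 128, 255)); // fill through a Mat header

    cv::cuda::Stream stream;
    cv::cuda::GpuMat d_img;
    d_img.upload(src, stream);    // asynchronous copy from pinned memory
    stream.waitForCompletion();
    return 0;
}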

Member Enumeration Documentation

◆ AllocType

Enumerator
PAGE_LOCKED 
SHARED 
WRITE_COMBINED 

Constructor & Destructor Documentation

◆ HostMem() [1/5]

cv::cuda::HostMem::HostMem ( HostMem::AllocType  alloc_type = HostMem::AllocType::PAGE_LOCKED)
explicit
Python
cv.cuda.HostMem([, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

◆ HostMem() [2/5]

cv::cuda::HostMem::HostMem ( const HostMem & m)
Python
cv.cuda.HostMem([, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

◆ HostMem() [3/5]

cv::cuda::HostMem::HostMem ( int  rows,
int  cols,
int  type,
HostMem::AllocType  alloc_type = HostMem::AllocType::PAGE_LOCKED 
)
Python
cv.cuda.HostMem([, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

◆ HostMem() [4/5]

cv::cuda::HostMem::HostMem ( Size  size,
int  type,
HostMem::AllocType  alloc_type = HostMem::AllocType::PAGE_LOCKED 
)
Python
cv.cuda.HostMem([, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

◆ HostMem() [5/5]

cv::cuda::HostMem::HostMem ( InputArray  arr,
HostMem::AllocType  alloc_type = HostMem::AllocType::PAGE_LOCKED 
)
explicit
Python
cv.cuda.HostMem([, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
cv.cuda.HostMem(arr[, alloc_type]) -> <cuda_HostMem object>

Creates from host memory, copying the data.
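
A hedged sketch of this overload: constructing a HostMem from an existing cv::Mat (passed as InputArray) copies the data into newly allocated page-locked memory. The Mat name and contents are illustrative.

#include <opencv2/core/cuda.hpp>

void copy_into_pinned()
{
    cv::Mat frame = cv::Mat::zeros(480, 640, CV_8UC1);
    cv::cuda::HostMem pinned(frame, cv::cuda::HostMem::PAGE_LOCKED); // data copied
    CV_Assert(pinned.rows == frame.rows && pinned.cols == frame.cols
              && pinned.type() == frame.type());
}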

◆ ~HostMem()

cv::cuda::HostMem::~HostMem ( )

Member Function Documentation

◆ channels()

int cv::cuda::HostMem::channels ( ) const
Python
cv.cuda.HostMem.channels() -> retval

◆ clone()

HostMem cv::cuda::HostMem::clone ( ) const
Python
cv.cuda.HostMem.clone() -> retval

Returns a deep copy of the matrix, i.e. the data is copied.
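
A brief sketch of the difference between header assignment (shared data, reference counted) and clone() (deep copy); the buffer size and type are arbitrary.

#include <opencv2/core/cuda.hpp>

void clone_vs_assign()
{
    cv::cuda::HostMem a(100, 100, CV_32FC1);
    cv::cuda::HostMem b = a;          // copy constructor: shares the same buffer
    cv::cuda::HostMem c = a.clone();  // deep copy: new pinned buffer, data copied
    CV_Assert(b.data == a.data && c.data != a.data);
}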

◆ create() [1/2]

void cv::cuda::HostMem::create ( int  rows,
int  cols,
int  type 
)
Python
cv.cuda.HostMem.create(rows, cols, type) -> None

Allocates new matrix data unless the matrix already has the specified size and type.
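
A short sketch of the lazy-allocation behaviour described above; the dimensions are illustrative.

#include <opencv2/core/cuda.hpp>

void create_example()
{
    cv::cuda::HostMem buf;            // empty, PAGE_LOCKED by default
    buf.create(720, 1280, CV_8UC3);   // allocates pinned memory
    buf.create(720, 1280, CV_8UC3);   // no reallocation: size and type already match
}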

◆ create() [2/2]

void cv::cuda::HostMem::create ( Size  size,
int  type 
)
Python
cv.cuda.HostMem.create(rows, cols, type) -> None

◆ createGpuMatHeader()

GpuMat cv::cuda::HostMem::createGpuMatHeader ( ) const

Maps CPU memory to the GPU address space and creates a cuda::GpuMat header without reference counting for it.

This can be done only if the memory was allocated with the SHARED flag and if it is supported by the hardware. Laptops often share video and CPU memory, so address spaces can be mapped, which eliminates an extra copy.
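
A hedged sketch of such zero-copy mapping, assuming the device supports mapped host memory (checked with cv::cuda::DeviceInfo::canMapHostMemory()); sizes are illustrative.

#include <opencv2/core/cuda.hpp>

void mapped_memory_example()
{
    cv::cuda::DeviceInfo info;
    if (!info.canMapHostMemory())
        return;                                             // zero-copy not supported

    cv::cuda::HostMem shared(480, 640, CV_8UC1, cv::cuda::HostMem::SHARED);
    cv::cuda::GpuMat d_view = shared.createGpuMatHeader();  // no copy, no refcount
    // Kernels writing through d_view modify the same physical memory that
    // shared.createMatHeader() exposes on the CPU side.
}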

◆ createMatHeader()

Mat cv::cuda::HostMem::createMatHeader ( ) const
Python
cv.cuda.HostMem.createMatHeader() -> retval

Returns a matrix header for the HostMem data, without reference counting.
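
A small sketch: the returned cv::Mat is only a header over the pinned buffer, so it must not outlive the HostMem that owns the memory. Names are illustrative.

#include <opencv2/core/cuda.hpp>

void mat_header_example()
{
    cv::cuda::HostMem pinned(480, 640, CV_8UC3);
    cv::Mat header = pinned.createMatHeader(); // no data copy, no reference counting
    header.setTo(cv::Scalar::all(0));          // writes directly into the pinned buffer
}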

◆ depth()

int cv::cuda::HostMem::depth ( ) const
Python
cv.cuda.HostMem.depth() -> retval

◆ elemSize()

size_t cv::cuda::HostMem::elemSize ( ) const
Python
cv.cuda.HostMem.elemSize() -> retval

◆ elemSize1()

size_t cv::cuda::HostMem::elemSize1 ( ) const
Python
cv.cuda.HostMem.elemSize1() -> retval

◆ empty()

bool cv::cuda::HostMem::empty ( ) const
Python
cv.cuda.HostMem.empty() -> retval

◆ getAllocator()

static MatAllocator * cv::cuda::HostMem::getAllocator ( HostMem::AllocType  alloc_type = HostMem::AllocType::PAGE_LOCKED)
static
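
A hedged sketch of a common use of this allocator: routing ordinary cv::Mat allocations through page-locked memory so that later GpuMat upload/download calls involving those matrices can run asynchronously.

#include <opencv2/core/cuda.hpp>

void use_pinned_allocator()
{
    cv::Mat::setDefaultAllocator(
        cv::cuda::HostMem::getAllocator(cv::cuda::HostMem::PAGE_LOCKED));
    cv::Mat m(1080, 1920, CV_8UC3);  // now backed by page-locked host memory
}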

◆ isContinuous()

bool cv::cuda::HostMem::isContinuous ( ) const
Python
cv.cuda.HostMem.isContinuous() -> retval

◆ operator=()

HostMem & cv::cuda::HostMem::operator= ( const HostMem & m)

◆ release()

void cv::cuda::HostMem::release ( )

Decrements the reference counter and releases the memory if needed.

◆ reshape()

HostMem cv::cuda::HostMem::reshape ( int  cn,
int  rows = 0 
) const
Python
cv.cuda.HostMem.reshape(cn[, rows]) -> retval

Creates an alternative HostMem header for the same data, with a different number of channels and/or a different number of rows.
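
A short sketch of reshape(): the returned HostMem shares the same buffer but reinterprets its shape; the 480x640 3-channel image below is illustrative.

#include <opencv2/core/cuda.hpp>

void reshape_example()
{
    cv::cuda::HostMem rgb(480, 640, CV_8UC3);
    cv::cuda::HostMem flat   = rgb.reshape(1);    // 480 x 1920, single channel, same data
    cv::cuda::HostMem onerow = rgb.reshape(0, 1); // 1 x 307200, still 3 channels, same data
}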

◆ size()

Size cv::cuda::HostMem::size ( ) const
Python
cv.cuda.HostMem.size() -> retval

◆ step1()

size_t cv::cuda::HostMem::step1 ( ) const
Python
cv.cuda.HostMem.step1() -> retval

◆ swap()

void cv::cuda::HostMem::swap ( HostMem & b)
Python
cv.cuda.HostMem.swap(b) -> None

Swaps with another smart pointer.

◆ type()

int cv::cuda::HostMem::type ( ) const
Python
cv.cuda.HostMem.type() -> retval

Member Data Documentation

◆ alloc_type

AllocType cv::cuda::HostMem::alloc_type

◆ cols

int cv::cuda::HostMem::cols

◆ data

uchar* cv::cuda::HostMem::data

◆ dataend

const uchar* cv::cuda::HostMem::dataend

◆ datastart

uchar* cv::cuda::HostMem::datastart

◆ flags

int cv::cuda::HostMem::flags

◆ refcount

int* cv::cuda::HostMem::refcount

◆ rows

int cv::cuda::HostMem::rows

◆ step

size_t cv::cuda::HostMem::step

The documentation for this class was generated from the following file:
opencv2/core/cuda.hpp