9 C++ Libraries for Numerical Computing

Contents

1 Eigen

2 Blaze

3 Armadillo

4 Blitz

5 IT++

6 Dlib

7 Ublas

8 Xtensor

9 MKL


1 Eigen

  • Eigen is versatile.
    • It supports all matrix sizes, from small fixed-size matrices to arbitrarily large dense matrices, and even sparse matrices.
    • It supports all standard numeric types, including std::complex, integers, and is easily extensible to custom numeric types.
    • It supports various matrix decompositions and geometry features.
    • Its ecosystem of unsupported modules provides many specialized features such as non-linear optimization, matrix functions, a polynomial solver, FFT, and much more.
  • Eigen is fast.
    • Expression templates allow intelligently removing temporaries and enable lazy evaluation, when that is appropriate.
    • Explicit vectorization is performed for SSE 2/3/4, AVX, AVX2, FMA, AVX512, ARM NEON (32-bit and 64-bit), PowerPC AltiVec/VSX (32-bit and 64-bit), ZVector (s390x/zEC13) SIMD instruction sets, and since 3.4 MIPS MSA with graceful fallback to non-vectorized code.
    • Fixed-size matrices are fully optimized: dynamic memory allocation is avoided, and the loops are unrolled when that makes sense.
    • For large matrices, special attention is paid to cache-friendliness.
  • Eigen is reliable.
    • Algorithms are carefully selected for reliability. Reliability trade-offs are clearly documented and extremely safe decompositions are available.
    • Eigen is thoroughly tested through its own test suite (over 500 executables), the standard BLAS test suite, and parts of the LAPACK test suite.
  • Eigen is elegant.
    • The API is extremely clean and expressive while feeling natural to C++ programmers, thanks to expression templates.
    • Implementing an algorithm on top of Eigen feels like just copying pseudocode.
  • Eigen has good compiler support as we run our test suite against many compilers to guarantee reliability and work around any compiler bugs. Eigen up to version 3.4 is standard C++03 and maintains reasonable compilation times. Versions following 3.4 will be C++14.
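
As a quick illustration (not part of the original Eigen description), here is a minimal dense-algebra sketch using Eigen's public API: a fixed-size matrix, a rank-revealing QR solve, and a residual check.

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // Fixed-size 3x3 matrix: no dynamic memory allocation
        Eigen::Matrix3d A = Eigen::Matrix3d::Random();
        Eigen::Vector3d b(1.0, 2.0, 3.0);

        // Solve A x = b with a rank-revealing (column-pivoted) QR decomposition
        Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);

        std::cout << "x =\n" << x << "\n"
                  << "residual = " << (A * x - b).norm() << "\n";
        return 0;
    }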

Eigen

2 Blaze

Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic. With its state-of-the-art Smart Expression Template implementation Blaze combines the elegance and ease of use of a domain-specific language with HPC-grade performance, making it one of the most intuitive and fastest C++ math libraries available.

The Blaze library offers ...

  • ... high performance through the integration of BLAS libraries and manually tuned HPC math kernels
  • ... vectorization by SSE, SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX-512, FMA, SVML and SLEEF
  • ... parallel execution by OpenMP, HPX, C++11 threads and Boost threads
  • ... the intuitive and easy to use API of a domain specific language
  • ... unified arithmetic with dense and sparse vectors and matrices
  • ... thoroughly tested matrix and vector arithmetic
  • ... completely portable, high quality C++ source code

Get an impression of the clear but powerful syntax of Blaze in the Getting Started tutorial and of the impressive performance in the Benchmarks section.
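
For flavour, here is a minimal sketch (not taken from the Blaze documentation) of the DSL-style API, assuming a recent Blaze 3.x release with initializer-list construction; the whole right-hand side is fused by the expression templates before evaluation:

    #include <blaze/Math.h>
    #include <iostream>

    int main() {
        // Dense matrix and vector (row-major storage is the default)
        blaze::DynamicMatrix<double> A{ { 4.0, 1.0, 0.0 },
                                        { 1.0, 3.0, 1.0 },
                                        { 0.0, 1.0, 2.0 } };
        blaze::DynamicVector<double> x{ 1.0, 2.0, 3.0 };

        // Smart expression templates evaluate this as a single fused kernel
        blaze::DynamicVector<double> y = A * x + 2.0 * x;

        std::cout << y << "\n";
        return 0;
    }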

Bitbucket

3 Armadillo

  1. Armadillo is a high quality linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use 
  2. Provides high-level syntax and functionality deliberately similar to Matlab 
  3. Useful for algorithm development directly in C++, or quick conversion of research code into production environments 
  4. Provides efficient classes for vectors, matrices and cubes; dense and sparse matrices are supported 
  5. Integer, floating point and complex numbers are supported 
  6. A sophisticated expression evaluator (based on template meta-programming) automatically combines several operations to increase speed and efficiency 
  7. Dynamic evaluation automatically chooses optimal code paths based on detected matrix structures 
  8. Various matrix decompositions (eigen, SVD, QR, etc.) are provided through integration with LAPACK, or one of its high-performance drop-in replacements (e.g. MKL or OpenBLAS)
  9. Can automatically use OpenMP multi-threading (parallelisation) to speed up computationally expensive operations 
  10. Distributed under the permissive Apache 2.0 license, useful for both open-source and proprietary (closed-source) software 
  11. Can be used for machine learning, pattern recognition, computer vision, signal processing, bioinformatics, statistics, finance, etc
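
A minimal sketch of the Matlab-like syntax (illustrative only, assuming Armadillo is linked against LAPACK/BLAS or a drop-in replacement such as OpenBLAS or MKL):

    #include <armadillo>
    #include <iostream>

    int main() {
        // Random dense matrix and right-hand side
        arma::mat A = arma::randu<arma::mat>(4, 4);
        arma::vec b = arma::randu<arma::vec>(4);

        // Solve A x = b; the decomposition is delegated to LAPACK/OpenBLAS/MKL
        arma::vec x = arma::solve(A, b);

        x.print("x:");
        std::cout << "residual norm: " << arma::norm(A * x - b) << "\n";
        return 0;
    }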

Armadillo: C++ library for linear algebra & scientific computing

4 Blitz

Blitz++ is a C++ template class library that provides high-performance multidimensional array containers for scientific computing.

Blitz++ has gone through some changes in location:

The original Blitz++ website was located at http://oonumerics.org/blitz (archived at The Object-Oriented Numerics Page).

Blitz++ then moved to SourceForge, at Blitz++ Library download | SourceForge.net.

The latest maintained version of Blitz++ is now on GitHub, at https://github.com/blitzpp/blitz

Diverse information on Blitz++ is now being catalogued at the GitHub wiki: http://github.com/blitzpp/blitz/wiki/

Licensing information is detailed in the LEGAL file. Summary: you can do anything except sell this library in source form. Blitz is licensed under either the Lesser GPL version 3 license (see COPYING and COPYING.LESSER), the BSD license (see COPYRIGHT), or the less restrictive Perl "artistic license" version 2.0 (see LICENSE).

Blitz++ uses CMake for build, test and installation automation. For details on using CMake, consult the CMake documentation. In short, the standard CMake configure, build and install sequence should work on UNIX-like systems.

Blitz++ is a meta-template library for array manipulation in C++ with a speed comparable to Fortran implementations, while preserving an object-oriented interface.

Blitz++ is implemented using expression template techniques, which allow optimizations such as loop fusion, unrolling, tiling, and algorithm specialization to be performed automatically at compile time, without relying on compiler optimizations.

The key rationale behind the development of Blitz++ has been that scientific computing requires domain-specific abstractions, such as arrays, matrices, and tensors. Building such abstractions into a language (such as arrays in Fortran 90) can result in fast code, but may also be limiting: such abstractions are hard to extend or modify, and economics restrict the number of features which may be included in a compiler. The solution offered by Blitz++ is to move high-level optimizations out of compilers and into libraries. The Blitz++ library demonstrates how this may be done in C++. The mechanisms are somewhat crude, but the results are appealing: Blitz++ arrays offer functionality and efficiency competitive with Fortran 90, but without any language extensions. The Blitz++ library is able to parse and analyse array expressions at compile time, and performs loop transformations which have until now been the responsibility of optimizing compilers. Furthermore, being a library makes it possible to incorporate new and useful features independently of the compiler. Some examples of such extensions featured in Blitz++ are flexible storage formats, tensor notation and index placeholders.
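
As an illustration of the tensor notation and index placeholders mentioned above (a sketch, not taken from the Blitz++ documentation):

    #include <blitz/array.h>
    #include <iostream>

    int main() {
        // Whole-array expressions are fused into single loops at compile time
        blitz::Array<double, 2> A(4, 4), B(4, 4);

        // Index placeholders give tensor-like notation
        blitz::firstIndex i;
        blitz::secondIndex j;

        A = i + 10.0 * j;     // A(i,j) = i + 10*j
        B = 2.0 * A + 1.0;    // element-wise expression, no temporaries

        std::cout << B << "\n";
        return 0;
    }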

Blitz++ matured back in the early noughties, and its development has been in maintenance mode as of the time of writing.

https://github.com/blitzpp/blitz

5 IT++

IT++ is a C++ library of mathematical, signal processing and communication classes and functions. Its main use is in simulation of communication systems and for performing research in the area of communications. The kernel of the library consists of generic vector and matrix classes, and a set of accompanying routines. Such a kernel makes IT++ similar to MATLAB, GNU Octave or SciPy.

The IT++ library originates from the former Department of Information Theory at the Chalmers University of Technology, Gothenburg, Sweden. Because the library is coded in C++, the name IT++ seemed like a good idea at the time. While departments come and go, IT++ has developed a life of its own and is now released under the terms of the GNU General Public License (GPL) for you to enjoy.

IT++ is being developed and widely used by researchers who work in the area of communications, both in the industry and at universities. In 2005, 2006 and 2007, IT++ was developed as a part of the European Network of Excellence in Wireless Communications (NEWCOM).

IT++ makes extensive use of existing open-source or commercial libraries for increased functionality, speed and accuracy. In particular, the BLAS, LAPACK and FFTW libraries can be used. Instead of the reference BLAS and LAPACK implementations, optimized platform-specific libraries can be used as well, e.g.:

  • ATLAS (Automatically Tuned Linear Algebra Software) - includes optimised BLAS and a limited set of LAPACK routines
  • MKL (Intel Math Kernel Library) - includes all required BLAS, LAPACK and FFT routines (FFTW not required)
  • ACML (AMD Core Math Library) - includes BLAS, LAPACK and FFT routines (FFTW not required)

It is possible to compile and use IT++ without any of the above listed libraries, but the functionality will be reduced.

IT++ should work on GNU/Linux, Sun Solaris, Microsoft Windows (with Cygwin, MinGW/MSYS or Microsoft Visual C++) and Mac OS X operating systems.
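
A small, illustrative sketch of the MATLAB-like kernel classes (assuming IT++ was built with BLAS/LAPACK so that the solver is available):

    #include <itpp/itbase.h>
    #include <iostream>

    int main() {
        // MATLAB-like vector and matrix classes
        itpp::mat A = itpp::randn(3, 3);   // 3x3 Gaussian random matrix
        itpp::vec b = itpp::randn(3);

        // Least-squares solve of A x = b (delegated to LAPACK)
        itpp::vec x = itpp::ls_solve(A, b);

        std::cout << "A = " << A << "\n";
        std::cout << "x = " << x << "\n";
        return 0;
    }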

Welcome to IT++!

6 Dlib

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments. Dlib's open source licensing allows you to use it in any application, free of charge.

To follow or participate in the development of dlib, subscribe to dlib on GitHub. Also be sure to read the how to contribute page if you intend to submit code to the project.

To quickly get started using dlib, follow the build instructions on the dlib website.

Its major features: High Quality Portable Code, Machine Learning Algorithms, Numerical Algorithms, Graphical Model Inference Algorithms, Image Processing, Threading, Networking, Graphical User Interfaces, Data Compression and Integrity Algorithms, Testing, and General Utilities.
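
A minimal sketch of dlib's matrix component (illustrative only; dlib's comma-initializer syntax is shown, and the explicit inverse is acceptable for tiny fixed-size matrices):

    #include <dlib/matrix.h>
    #include <iostream>

    int main() {
        // Fixed-size matrix and column vector, filled via the comma initializer
        dlib::matrix<double, 3, 3> A;
        A = 2, 0, 0,
            0, 3, 0,
            0, 0, 4;

        dlib::matrix<double, 3, 1> b;
        b = 1, 2, 3;

        // Solve A x = b via the explicit inverse (fine for a 3x3 system)
        dlib::matrix<double, 3, 1> x = dlib::inv(A) * b;

        std::cout << "x =\n" << x << "\n";
        return 0;
    }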

dlib C++ Library

7 Ublas

uBLAS is a C++ template class library that provides BLAS level 1, 2, 3 functionality for dense, packed and sparse matrices. The design and implementation unify mathematical notation via operator overloading and efficient code generation via expression templates.

uBLAS provides templated C++ classes for dense, unit and sparse vectors, dense, identity, triangular, banded, symmetric, hermitian and sparse matrices. Views into vectors and matrices can be constructed via ranges, slices, adaptor classes and indirect arrays. The library covers the usual basic linear algebra operations on vectors and matrices: reductions like different norms, addition and subtraction of vectors and matrices and multiplication with a scalar, inner and outer products of vectors, matrix vector and matrix matrix products and triangular solver. The glue between containers, views and expression templated operations is a mostly STL conforming iterator interface.
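
A minimal sketch of the container and expression-template interface (illustrative, using only the documented headers boost/numeric/ublas/matrix.hpp, vector.hpp and io.hpp):

    #include <boost/numeric/ublas/matrix.hpp>
    #include <boost/numeric/ublas/vector.hpp>
    #include <boost/numeric/ublas/io.hpp>
    #include <iostream>

    int main() {
        namespace ublas = boost::numeric::ublas;

        // Dense 3x3 matrix and a dense vector
        ublas::matrix<double> A(3, 3);
        ublas::vector<double> x(3);
        for (unsigned i = 0; i < A.size1(); ++i) {
            x(i) = i + 1.0;
            for (unsigned j = 0; j < A.size2(); ++j)
                A(i, j) = 3.0 * i + j;
        }

        // BLAS level 2 matrix-vector product, built on expression templates
        ublas::vector<double> y = ublas::prod(A, x);

        std::cout << y << "\n";   // io.hpp provides operator<<
        return 0;
    }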

https://www.boost.org/doc/libs/1_68_0/libs/numeric/ublas/doc/index.html

8 Xtensor

xtensor is a C++ library meant for numerical analysis with multi-dimensional array expressions.

xtensor provides

  • an extensible expression system enabling lazy broadcasting.
  • an API following the idioms of the C++ standard library.
  • tools to manipulate array expressions and build upon xtensor.

Containers of xtensor are inspired by NumPy, the Python array programming library. Adaptors for existing data structures to be plugged into the expression system can easily be written.

In fact, xtensor can be used to process NumPy data structures in-place using Python's buffer protocol. For more details on the NumPy bindings, check out the xtensor-python project. Language bindings for R and Julia are also available.
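
A minimal sketch of the lazy broadcasting expression system (illustrative; the classic xtensor/xarray.hpp and xtensor/xio.hpp headers are assumed):

    #include <xtensor/xarray.hpp>
    #include <xtensor/xio.hpp>
    #include <iostream>

    int main() {
        // NumPy-style containers
        xt::xarray<double> a = { { 1.0, 2.0, 3.0 },
                                 { 4.0, 5.0, 6.0 } };
        xt::xarray<double> b = { 10.0, 20.0, 30.0 };  // shape (3,) broadcasts over rows

        // The sum is a lazy expression; assigning to an xarray evaluates it
        xt::xarray<double> c = a + b;

        std::cout << c << "\n";
        return 0;
    }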

xtensor requires a modern C++ compiler supporting C++14. The following C++ compilers are supported:

  • On Windows platforms, Visual C++ 2015 Update 2, or more recent
  • On Unix platforms, gcc 4.9 or a recent version of Clang

Introduction — xtensor documentation

9 MKL

MKL (Math Kernel Library) is Intel's library of optimized mathematical routines for Intel processors. It contains the following classes of routines: BLAS, LAPACK, the CBLAS interface, FFTs/DFTs, the Vector Math Library (VML), and the Vector Statistical Library (VSL).

Linear Algebra

Speed up linear algebra computations with low-level routines that operate on vectors and matrices, and are compatible with these industry-standard BLAS and LAPACK operations:

  • Level 1: Vector-vector operations
  • Level 2: Matrix-vector operations
  • Level 3: Matrix-matrix operations
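
For example, a level 3 operation through the standard CBLAS interface (a sketch; compile and link against MKL, e.g. with the options suggested by Intel's link-line advisor):

    #include <mkl.h>
    #include <cstdio>

    int main() {
        // C = 1.0 * A * B + 0.0 * C, with 2x3 and 3x2 row-major matrices
        const int m = 2, k = 3, n = 2;
        double A[m * k] = { 1, 2, 3,
                            4, 5, 6 };
        double B[k * n] = { 7,  8,
                            9, 10,
                           11, 12 };
        double C[m * n] = { 0 };

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A, k, B, n, 0.0, C, n);

        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j)
                std::printf("%8.1f", C[i * n + j]);
            std::printf("\n");
        }
        return 0;
    }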

Sparse Linear Algebra Functions

Perform various operations on sparse matrices with low-level and inspector-executor routines including the following:

  • Multiply sparse matrix with dense vector
  • Multiply sparse matrix with dense matrix
  • Solve linear systems with triangular sparse matrices
  • Solve linear systems with general sparse matrices

Fast Fourier Transforms (FFT)

Transform a signal from its original domain (typically time or space) into a representation in the frequency domain and back. Use FFT functions in one, two, or three dimensions with support for mixed radices. The supported functions include complex-to-complex and real-to-complex transforms of arbitrary length in single-precision and double-precision.
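
A sketch of a 1-D complex-to-complex transform via MKL's DFTI interface (illustrative; error handling of the returned status codes is omitted):

    #include <mkl_dfti.h>
    #include <cstdio>

    int main() {
        const MKL_LONG n = 8;
        MKL_Complex16 data[8];
        for (int i = 0; i < 8; ++i) { data[i].real = i; data[i].imag = 0.0; }

        // Descriptor for a double-precision, complex, 1-D transform of length n
        DFTI_DESCRIPTOR_HANDLE handle = nullptr;
        DftiCreateDescriptor(&handle, DFTI_DOUBLE, DFTI_COMPLEX, 1, n);
        DftiCommitDescriptor(handle);
        DftiComputeForward(handle, data);   // in-place forward transform
        DftiFreeDescriptor(&handle);

        std::printf("bin 0: %g + %gi\n", data[0].real, data[0].imag);
        return 0;
    }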

Random Number Generator Functions (RNG)
Use common pseudorandom, quasi-random, and non-deterministic random number engines to solve continuous and discrete distributions.
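
A sketch of drawing normal variates with the VSL interface (illustrative; the Mersenne Twister engine and the inverse-CDF method are used here, and status checks are omitted):

    #include <mkl_vsl.h>
    #include <cstdio>

    int main() {
        VSLStreamStatePtr stream;
        vslNewStream(&stream, VSL_BRNG_MT19937, 42);   // MT19937 engine, seed 42

        double r[5];
        // Five standard normal samples via the inverse-CDF method
        vdRngGaussian(VSL_RNG_METHOD_GAUSSIAN_ICDF, stream, 5, r, 0.0, 1.0);

        for (int i = 0; i < 5; ++i)
            std::printf("%f\n", r[i]);

        vslDeleteStream(&stream);
        return 0;
    }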

Data Fitting

Provide spline-based interpolation capabilities that you can use to approximate functions, function derivatives or integrals, and perform cell search operations.

Vector Math
Balance accuracy and performance with vector-based elementary functions. Manipulate values with traditional algebraic and trigonometric functions.
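
For instance, a vectorized element-wise exponential through the VML interface (a sketch; vdExp computes r[i] = exp(a[i]) over the whole array):

    #include <mkl.h>
    #include <cstdio>

    int main() {
        const int n = 4;
        double a[n] = { 0.0, 0.5, 1.0, 2.0 };
        double r[n];

        vdExp(n, a, r);   // element-wise exp, computed by a tuned SIMD kernel
        for (int i = 0; i < n; ++i)
            std::printf("exp(%g) = %g\n", a[i], r[i]);
        return 0;
    }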

Summary Statistics 
Compute basic statistical estimates (such as raw or central sums and moments) for single- and double-precision multidimensional datasets.

Accelerate Fast Math with Intel® oneAPI Math Kernel Library

posted @ 2022-08-21 10:12  Oliver2022