* apply npkit
* simplify code
* cleaning
* more clean up
* cleaned up
* cleaned up
* readme clean up
* works
* minor change in naming for input args
* added DEP_CHECK in the events
Co-authored-by: Saeed Maleki <saemal@microsoft.com>
Co-authored-by: Saeed Maleki <30272783+saeedmaleki@users.noreply.github.com>
* Removing unnecessary variables.
* Removing unnecessary variable and updating the value by need.
* Replacing a duplicated arithmetic operation with one load operation and one arithmetic operation.
* Updating the pull request following guidelines in the comments of the pull request.
* Updating the pull request following guidelines in the comments of the pull request.
* 2.9.6-1
Add support for CUDA graphs.
Fuse BCM Gen4 switches to avoid suboptimal performance on some platforms. Issue #439.
Fix bootstrap issue caused by connection reordering.
Fix CPU locking block.
Improve CollNet algorithm.
Improve performance on DGX A100 for communicators with only one GPU per node.
* 2.9.8-1
Fix memory leaks.
Fix crash in bootstrap error case.
Fix Collnet clean-up issue.
Make PCI switch vendor/device optional for XML injection.
Add support for nvidia-peermem module.
* 2.9.9-1
Fix crash when setting NCCL_MAX_P2P_NCHANNELS below nchannels.
Fix hang during sendrecv dynamic NVB connection establishment on
cubemesh topologies.
Add environment variable to only use SHARP on communicators beyond
a given number of ranks.
Add debug subsystem to trace memory allocations.
Fix compilation with TRACE=1. (Issue #505)
* 2.10.3-1
Add support for bfloat16.
Add ncclAvg reduction operation.
Improve performance for aggregated operations.
Improve performance for tree.
Improve network error reporting.
Add NCCL_NET parameter to force a specific network.
Add NCCL_IB_QPS_PER_CONNECTION parameter to split IB traffic onto multiple queue pairs.
Fix topology detection error in WSL2.
Fix proxy memory elements affinity (improve alltoall performance).
Fix graph search on cubemesh topologies.
Fix hang in cubemesh during NVB connections.
* Fix for https://github.com/NVIDIA/nccl/issues/560
ncclGroups containing operations of mixed datatype, element count, or collective
would crash.
* 2.11.4-1
Add new API for creating a reduction operation which multiplies the input by a rank-specific scalar before doing an inter-rank summation (see ncclRedOpCreatePreMulSum; a usage sketch follows this list).
Improve CollNet (SHARP) performance of ncclAllReduce when captured in a CUDA Graph via user buffer registration.
Add environment variable NCCL_NET_PLUGIN="<suffix>" to allow user to choose among multiple NCCL net plugins by substituting into "libnccl-net-<suffix>.so".
Fix memory leak of NVB connections.
Fix topology detection of IB Virtual Functions (SR-IOV).
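As a usage note, here is a minimal sketch of the pre-multiply-sum API mentioned above, assuming an existing communicator and stream; the helper name and the averaging use case are illustrative, not part of the release notes.

```c
#include <nccl.h>

/* Illustrative only: scale each rank's contribution by `scale` on input,
 * then sum across ranks (for example, scale = 1.0f / nranks turns the
 * allreduce into an average). Error handling omitted for brevity. */
static ncclResult_t scaledAllReduce(const float* sendbuf, float* recvbuf,
                                    size_t count, float scale,
                                    ncclComm_t comm, cudaStream_t stream) {
  ncclRedOp_t op;
  /* ncclScalarHostImmediate: the scalar is read from host memory at call time. */
  ncclRedOpCreatePreMulSum(&op, &scale, ncclFloat, ncclScalarHostImmediate, comm);
  ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, op, comm, stream);
  return ncclRedOpDestroy(op, comm);
}
```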
* Fix Collnet when GDR is disabled
* Fix compilation failure in "src/enqueue.cc" on older GCC because of
missing `#include <cstring>`.
* Perform `busIdToInt64` on the stack.
I noticed when I enabled `NCCL_DEBUG_SUBSYS=ALLOC` that this function is
called thousands of times, making the log output unintelligible.
Fortunately, this function can be implemented without heap allocations.
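For reference, a heap-free conversion can look like the following minimal sketch. It illustrates the idea (hex-decode the PCI bus ID while skipping separators, using only stack storage); it is not the exact NCCL implementation.

```c
#include <stdint.h>
#include <ctype.h>

/* Illustrative sketch: convert a PCI bus ID string such as "0000:07:00.0"
 * to an int64 by accumulating its hex digits and skipping the ':' and '.'
 * separators. No heap allocation involved. */
static int64_t busIdToInt64Sketch(const char* busId) {
  int64_t id = 0;
  for (const char* p = busId; *p; p++) {
    char c = (char)tolower((unsigned char)*p);
    if (c == ':' || c == '.') continue;            /* skip separators */
    if (c >= '0' && c <= '9') id = (id << 4) | (c - '0');
    else if (c >= 'a' && c <= 'f') id = (id << 4) | (c - 'a' + 10);
    else break;                                    /* stop on unexpected char */
  }
  return id;
}
```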
* Improve warning message about truncated messages
Display hints about the cause so that it is easier for the user to debug.
Also change the error type from InternalError to InvalidUsage, as most
of the time this is caused by a mismatch in collective size or env settings.
* Add env NCCL_NET_DISABLE_INTRA
Setting the env to 1 disables the NET transport for intra-node communication.
It provides an option to error out instead of falling back to NET when superior intra-node transports (P2P and SHM) are unavailable.
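A minimal usage sketch, assuming the variable only needs to be present in the process environment before the communicator is created (the helper name is illustrative):

```c
#include <stdlib.h>

/* Illustrative: ask NCCL to error out instead of falling back to the NET
 * transport for intra-node traffic. Call before creating the communicator. */
static void disableIntraNodeNetFallback(void) {
  setenv("NCCL_NET_DISABLE_INTRA", "1", 1 /* overwrite */);
}
```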
* Build fastsocket plugin from ext-net
* remove unused basePath
* Revert "remove unused basePath"
This reverts commit 445bc19657.
* Fix ext-net/google-fastsocket build
* Split IB parameter sanity check into two parts
First part on collective mismatch, second part on internal errors
* 2.12.7-1
Add network communication through another GPU connected with NVLink
(PXN).
Add aggregation of messages coming from different local GPUs through
PXN and going to the same destination.
Add new v5 plugin API with grouped receives and tags.
Add compat for v4 plugins.
Add naming of NCCL threads to help debugging.
Fix NVLink detection and avoid data corruption when some NVLinks are
down.
Add support for Relaxed Ordering for IB.
Add profiling and timing infrastructure.
* Add pthread_detach()'s for threads we never pthread_join(). Helps
reduce diagnostic noise for ThreadSanitizer.
Fixes https://github.com/NVIDIA/nccl/issues/649
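The pattern is the usual one for fire-and-forget threads; a minimal sketch (names are illustrative, not NCCL's):

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative: a helper thread that is never joined. Detaching it lets the
 * system reclaim its resources when it exits and keeps ThreadSanitizer from
 * flagging a thread leak. */
static void* serviceLoop(void* arg) { (void)arg; return NULL; }

static int startDetachedService(void) {
  pthread_t t;
  if (pthread_create(&t, NULL, serviceLoop, NULL) != 0) return -1;
  return pthread_detach(t);
}
```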
* Remove unnecessary newline in plugin logging
Signed-off-by: Felix Abecassis <fabecassis@nvidia.com>
* Fix typo in net_ib.cc
* Display host name instead of numeric IP when referring to a peer
For easier interpretation of debug messages like "connection closed by
peer", "peer message truncated" and "peer collective mismatch"
* Fix merging error
* 2.12.10-1
Fix bug with CollNet
Fix bug with zero-bytes send/recv operations
Fix NCCL_PARAM implementation to avoid taking a lock on every call
Fix bug when setting NCCL_IB_QPS_PER_CONNECTION to more than one.
Improve error reporting for network errors.
* Update Makefile to install static library.
Make sure make install also installs the static library.
Fixes #662
* 2.12.12-1
Improve allreduce performance when we have more than one network interface per
GPU and we need to use PXN to close rings.
Add support for PCI Gen5 on 5.4 kernels.
Fix crash when setting NCCL_SET_THREAD_NAME.
Fix random crash in init due to uninitialized struct.
Fix hang on cubemesh topologies.
Add P2P_DIRECT_DISABLE parameter to disable direct access to pointers within a
process.
* progress
* progress
* device stuff are done
* should go the other way around
* p
* p
* porting
* msccl version
* bug fix for rcs
* adding msccl.h
* bug fix for count
* multiple bug fixes: reduction chain, reduce_scatter count finder, bandwidth = 1 for msccl
* Reduction in prims (#32)
* reducing inside prims generic op
* bug fix
* bug fix
* LL128 genericOP for load/store
* all 3 protocols have the right reduce operation
* lower latency for LL re
* local copy works as well
* fixed blockExitst issue
* fixing p2p nchannels
* merged with master
* bug fix in agg mode
* fix for when eqInfo list is empty
* int overflow fix for when nchunksperloop is really large
* bug fix for large buffer size chained reductions
* i_chunks and o_chunks are now checked at parsing time to prevent faulty accesses
* Fix syncthreads (#39)
* fixing barriers
* small change
* comment for why we use special send
* comment for why we use special send
* Fence hack (#40)
* fixing barriers
* small change
* comment for why we use special send
* comment for why we use special send
* adding a fence
* adding a threadfence in case of non-P2P transport
* bug fix for non LL protocols
* compilation time fix
* Fix compile (#42)
* not correct yet
* not correct yet!
* clean up
* seems done
* clean up
* bug fix
* more bug fix
* Complete -- reduces the compilation time.
* removed printf
Co-authored-by: Sylvain Jeaugey <sjeaugey@nvidia.com>
Co-authored-by: Ke Wen <kwen@nvidia.com>
Co-authored-by: John Bachan <jbachan@nvidia.com>
Co-authored-by: Chris Jones <cjfj@deepmind.com>
Co-authored-by: Ke Wen <kw2501@fb.com>
Co-authored-by: Chang Lan <changlan@google.com>
Co-authored-by: void-main <voidmain1313113@gmail.com>
Co-authored-by: Felix Abecassis <fabecassis@nvidia.com>
Co-authored-by: Christopher Hesse <christopherhesse@users.noreply.github.com>
Co-authored-by: Jingji Chen <jingji.chen.000@gmail.com>
* fix for readAL in LL protocol
* fix for reduction op
* volatile readLL and reduction
* nchunksperloop bug fix in tuning
* unnecessary instruction removed
* rolling back al reads in LL
* going back to non-volatile load in readAL
There are runtime errors when building with CUDA 11.6 or above, which are
related to linking in nvcc:
* 11.6~11.6.2: "invalid device function"
* 11.7: "named symbol not found"
This commit links the stride_copy object file with relocatable device code into
a separate object file with executable device code, kept separate from nccl's
collectives, then archives both of them in the static library.
* all dependences are pushed into one step
* reverting back the reduce operator
* error in nchunksperloop was fixed
* bug fix for allgather
* bug fix for merged dependences
* no load_coll is required with SCCL
* removed alltoall local cudaMemcpy
* fixing the freezing problem with p2p
* bug fix for nchannels in scclAlgo
* compiling with sm_35
* supports async for 1 operation
* better messaging
* bug fix with multiple XML files; XML files for low-latency allreduces
* fixed (#10)
* Adjusting boundaries for allreduces on A100 (#11)
* fixed
* set boundaries
* fixed scratch chunks number for ar_ll
* more details with DEBUG_SUBSYS
* set the boundaries for ar_ll128 with 20 threadblocks
* increasing the limit for ar_ll128
* Lowlatency merging (#17)
* update to dependence checker
* merged main into lowlatency -- compiles
* added a check for incorrect sequence of dependence check
* 2D Hierarchical AlltoAll Algorithm
The 2D hierarchical AllToAll algorithm is implemented manually with strided copy kernels and p2p sends and receives (a rough sketch of the strided-copy idea follows below). To use it, the XML needs to follow this example:
`<algo name="2D" nchunksperloop="32" nchannels="1" proto="Simple" ngpus="32" inplace="0" coll="alltoall"></algo>`
As usual, set `SCCL_XML_FILES=<path_to_empty_xml_algo>` to use this implementation of the 2D hierarchical AllToAll algorithm.
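As a rough illustration of the strided-copy step only (the chunk layout and names here are assumptions, not the actual kernel), the idea is to gather every `stride`-th chunk of the send buffer into a contiguous staging buffer so that all data headed to one peer can go out in a single p2p send:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative host-side sketch of a strided chunk gather: copy every
 * `stride`-th chunk, starting at chunk `first`, into a contiguous staging
 * buffer. The real implementation is a CUDA kernel on device memory. */
static void stridedGather(char* dst, const char* src, size_t chunkBytes,
                          size_t first, size_t stride, size_t nchunks) {
  for (size_t i = 0; i < nchunks; i++) {
    memcpy(dst + i * chunkBytes, src + (first + i * stride) * chunkBytes, chunkBytes);
  }
}
```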