Commit graph

399 Commits

Author SHA1 Message Date
Saeed Maleki b23e9cd5dd
fix missing signature (#58)
Co-authored-by: root <root@a100-saemal0.qxveptpukjsuthqvv514inp03c.gx.internal.cloudapp.net>
2023-09-20 11:02:36 -07:00
JihaoXin 07da7c3725
include cstdint (#57)
Co-authored-by: jihao <jihaoxin1998@gmail.com>
2023-08-23 10:40:51 -07:00
Benson Muite 3908e26f47
Remove repeated text (#56) 2023-08-22 23:09:45 -07:00
Saeed Maleki b8d148f22e using clipping only for sm_80 and higher 2023-08-21 22:36:05 +00:00
Saeed Maleki baa3a6eb61 precision clipping for half 2023-08-19 22:28:54 +00:00
Saeed Maleki 4a80db6242 updating nccl version 2023-08-19 18:32:19 +00:00
root 29f628f306 work index bug fix 2023-08-19 18:18:21 +00:00
Saeed Maleki e6ce35ce47 adding ASPLOS23 paper 2022-09-21 18:44:58 -07:00
Abhishek Jindal 944e6639b8
Adding nccl patch fixes (#47)
* adding patch file for torch 1.13

* renaming patch file
2022-09-20 16:44:36 -07:00
Saeed Maleki 357dfd0b11 updating patch for torch1.12.0 2022-09-12 20:35:41 +00:00
Saeed Maleki 9f75ec37e9 adding torch1.12 patch for ncclAllToAll 2022-09-08 21:45:22 +00:00
Saeed Maleki 84856d9198
Scratch pad is allocated in init (#46)
* allocating the MSCCL scratch buffer in init instead of enqueue

* fixing nChannel assignment

* TRACE=1 fix

* debugging info

* removing unnecessary printfs

* typo
2022-09-08 13:52:32 -07:00
Saeed Maleki d6f8332a4c name trimming for json file 2022-08-29 23:39:48 +00:00
Ziyue Yang 0696e99095
Add Feature - Add NPKit Support in MSCCL (#44)
* apply npkit

* simplify code

* cleaning

* more clean up

* cleaned up

* cleaned up

* readme clean up

* works

* minor change in naming for input args

* added DEP_CHECK in the events

Co-authored-by: Saeed Maleki <saemal@microsoft.com>
Co-authored-by: Saeed Maleki <30272783+saeedmaleki@users.noreply.github.com>
2022-08-25 16:45:20 -07:00
Saeed Maleki 7f595cef1f bug fix for allgather LL proto 2022-08-19 22:50:16 +00:00
Angelica Moreira ebf58d0216
Minor changes in the interpreter (#45)
* Removing unnecessary variables.

* Removing unnecessary variable and updating the value by need.

* Replacing a duplicated arithmetic operation with one load operation and one arithmetic operation.

* Updating the pull request following guidelines in the comments of the pull request.

* Updating the pull request following guidelines in the comments of the pull request.
2022-08-08 16:41:06 -07:00
Saeed Maleki 16db280f4a version update 2022-08-03 22:54:41 +00:00
Saeed Maleki 72331f9b37 scripts is no longer needed 2022-08-02 20:22:27 +00:00
Saeed Maleki 040726947b updated readme with cuda graphs 2022-08-02 20:21:02 +00:00
Saeed Maleki 58b5006ded
Merging with nccl 2.12.12 (#43)
* 2.9.6-1

Add support for CUDA graphs.
Fuse BCM Gen4 switches to avoid suboptimal performance on some platforms. Issue #439.
Fix bootstrap issue caused by connection reordering.
Fix CPU locking block.
Improve CollNet algorithm.
Improve performance on DGX A100 for communicators with only one GPU per node.

* 2.9.8-1

Fix memory leaks.
Fix crash in bootstrap error case.
Fix Collnet clean-up issue.
Make PCI switch vendor/device optional for XML injection.
Add support for nvidia-peermem module.

* 2.9.9-1

Fix crash when setting NCCL_MAX_P2P_NCHANNELS below nchannels.
Fix hang during sendrecv dynamic NVB connection establishment on
cubemesh topologies.
Add environment variable to only use SHARP on communicators beyond
a given number of ranks.
Add debug subsystem to trace memory allocations.
Fix compilation with TRACE=1. (Issue #505)

* 2.10.3-1

Add support for bfloat16.
Add ncclAvg reduction operation.
Improve performance for aggregated operations.
Improve performance for tree.
Improve network error reporting.
Add NCCL_NET parameter to force a specific network.
Add NCCL_IB_QPS_PER_CONNECTION parameter to split IB traffic onto multiple queue pairs.
Fix topology detection error in WSL2.
Fix proxy memory elements affinity (improve alltoall performance).
Fix graph search on cubemesh topologies.
Fix hang in cubemesh during NVB connections.

* Fix to https://github.com/NVIDIA/nccl/issues/560

ncclGroups containing operations of mixed datatype, element, or collective
would induce a crash.

* 2.11.4-1

Add new API for creating a reduction operation which multiplies the input by a rank-specific scalar before doing an inter-rank summation (see: ncclRedOpCreatePreMulSum).
Improve CollNet (SHARP) performance of ncclAllReduce when captured in a CUDA Graph via user buffer registration.
Add environment variable NCCL_NET_PLUGIN="<suffix>" to allow user to choose among multiple NCCL net plugins by substituting into "libnccl-net-<suffix>.so".
Fix memory leak of NVB connections.
Fix topology detection of IB Virtual Functions (SR-IOV).

* Fix Collnet when GDR is disabled

* Fix compilation failure in "src/enqueue.cc" on older GCC because of
missing `#include <cstring>`.

* Perform `busIdToInt64` on the stack.

I noticed when I enabled `NCCL_DEBUG_SUBSYS=ALLOC` that this function is
called thousands of times, making the log output unintelligible.
Fortunately, this function can be implemented without heap allocations.

* Improve warning message about truncated messages

Display hints about the cause so that it is easier for the user to debug.
Also change the error type from InternalError to InvalidUsage, as most
of the time this is caused by a mismatch in collective size or env settings.

* Add env NCCL_NET_DISABLE_INTRA

Disable NET transport for intra-node communication by setting the env to 1.
It provides an option to error out instead of falling back to NET when superior intra-node transports (P2P and SHM) are unavailable.

* Build fastsocket plugin from ext-net

* remove unused basePath

* Revert "remove unused basePath"

This reverts commit 445bc19657.

* Fix ext-net/google-fastsocket build

* Split IB parameter sanity check into two parts

First part on collective mismatch, second part on internal errors

* 2.12.7-1

Add network communication through another GPU connected with NVLink
(PXN).
Add aggregation of messages coming from different local GPUs through
PXN and going to the same destination.
Add new v5 plugin API with grouped receives and tags.
Add compat for v4 plugins.
Add naming of NCCL threads to help debugging.
Fix NVLink detection and avoid data corruption when some NVLinks are
down.
Add support for Relaxed Ordering for IB.
Add profiling and timing infrastructure.

* Add pthread_detach()'s for threads we never pthread_join(). Helps
reduce diagnostic noise for ThreadSanitizer.

Fixes https://github.com/NVIDIA/nccl/issues/649

* Remove unnecessary newline in plugin logging

Signed-off-by: Felix Abecassis <fabecassis@nvidia.com>

* Fix typo in net_ib.cc

* Display host name instead of numeric IP when referring to a peer

For easier interpretation of debug messages like "connection closed by
peer", "peer message truncated" and "peer collective mismatch"

* Fix merging error

* 2.12.10-1

Fix bug with CollNet
Fix bug with zero-bytes send/recv operations
Fix NCCL_PARAM implementation to avoid taking a lock on every call
Fix bug when setting NCCL_IB_QPS_PER_CONNECTION to more than one.
Improve error reporting for network errors.

* Update Makefile to install static library.

Make sure make install also installs the static library. 
Fixes #662

* 2.12.12-1

Improve allreduce performance when we have more than one network interface per
GPU and we need to use PXN to close rings.
Add support for PCI Gen5 on 5.4 kernels.
Fix crash when setting NCCL_SET_THREAD_NAME.
Fix random crash in init due to uninitialized struct.
Fix hang on cubemesh topologies.
Add P2P_DIRECT_DISABLE parameter to disable direct access to pointers within a
process.

* progress

* progress

* device stuff are done

* should go the other way around

* p

* p

* porting

* msccl version

* bug fix for rcs

* adding msccl.h

* bug fix for count

* multiple bug fixes: reduction chain, reduce_scatter count finder, bandwidth = 1 for msccl

* Reduction in prims (#32)

* reducing inside prims generic op

* bug fix

* bug fix

* LL128 genericOP for load/store

* all 3 protocols have the right reduce operation

* lower latency for LL re

* local copy works as well

* fixed blockExitst issue

* fixing p2p nchannels

* merged with master

* bug fix in agg mode

* fix for when eqInfo list is empty

* int overflow fix for when nchunksperloop is really large

* bug fix for large buffer size chained reductions

* i_chunks and o_chunks are now checked at parsing time to prevent faulty accesses

* Fix syncthreads (#39)

* fixing barriers

* small change

* comment for why we use special send

* comment for why we use special send

* Fence hack (#40)

* fixing barriers

* small change

* comment for why we use special send

* comment for why we use special send

* adding a fence

* adding a threadfence in case of non-P2P transport

* bug fix for non LL protocols

* compilation time fix

* Fix compile (#42)

* not correct yet

* not correct yet!

* clean up

* seems done

* clean up

* bug fix

* more bug fix

* Complete -- reduces down the compilation time.

* removed printf

Co-authored-by: Sylvain Jeaugey <sjeaugey@nvidia.com>
Co-authored-by: Ke Wen <kwen@nvidia.com>
Co-authored-by: John Bachan <jbachan@nvidia.com>
Co-authored-by: Chris Jones <cjfj@deepmind.com>
Co-authored-by: Ke Wen <kw2501@fb.com>
Co-authored-by: Chang Lan <changlan@google.com>
Co-authored-by: void-main <voidmain1313113@gmail.com>
Co-authored-by: Felix Abecassis <fabecassis@nvidia.com>
Co-authored-by: Christopher Hesse <christopherhesse@users.noreply.github.com>
Co-authored-by: Jingji Chen <jingji.chen.000@gmail.com>
2022-08-02 13:17:42 -07:00
Xinchi_Huang 7762d7cd99
Update devcomm.h (#37)
fix a bug in line 124
2022-07-19 07:29:06 -07:00
Saeed Maleki 088d316165
Xml malloc fix (#31)
* progress

* fixed

* moved handlers inside
2022-07-12 14:58:04 -07:00
Saeed Maleki 2248b5237f
bug fixes for reduction operations (#29)
* fix for readAL in LL protocol

* fix for reduction op

* volatile readLL and reduction

* nchunksperloop bug fix in tuning

* unnecessary instruction removed

* rolling back al reads in LL

* going back to non-volatile load in readAL
2022-07-11 14:18:02 -07:00
Saeed Maleki cbe70894b4 NSDI paper 2022-07-08 22:54:10 +00:00
Saeed Maleki 89afb08ee7 bug fix for chained reductions 2022-07-02 09:06:57 +00:00
Yifan Xiong 80ce192467
Fix nvcc link issue for alltoall in cuda 11.6+ (#24)
There are runtime errors when building with CUDA 11.6 or above, which are
related to linking in nvcc:
* 11.6~11.6.2: "invalid device function"
* 11.7: "named symbol not found"

This commit links the stride_copy object file, which has relocatable device
code, into a separate object file with executable device code, apart from
nccl's collectives, and then archives both of them in the static library.
2022-06-08 17:13:22 -07:00
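The separate device-link step described in the commit above can be sketched as a Makefile fragment. This is illustrative only: the file names (`stride_copy.cu`), variable names (`COLL_OBJS`), and flags are assumptions, not the repository's actual build rules.

```make
# Sketch: compile stride_copy with relocatable device code (-rdc=true),
# then device-link it into its own object file with executable device
# code, separate from the ordinary collective objects, and archive
# everything into the static library.
stride_copy.o: stride_copy.cu
	nvcc -rdc=true -c $< -o $@

# Device-link step: resolves device symbols into a host-linkable object.
stride_copy_dlink.o: stride_copy.o
	nvcc -dlink $< -o $@

libnccl_static.a: $(COLL_OBJS) stride_copy.o stride_copy_dlink.o
	ar rcs $@ $^
```

Keeping the device-linked object separate from nccl's collectives is what avoids the "invalid device function" / "named symbol not found" errors the commit describes.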
Saeed Maleki 1d3f0a4917 temp fix for strided copy 2022-05-25 07:22:42 +00:00
microsoft-github-policy-service[bot] d942ed00aa
Microsoft mandatory file (#22)
Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
2022-05-24 13:07:28 -07:00
Saeed Maleki 6f6ceeb952 typo fixes 2022-05-23 20:39:17 +00:00
Saeed Maleki ecf3c4f22a
Renaming (#21)
* all names changed to msccl now

* renaming successful

* readme

* renaming done
2022-05-23 13:16:29 -07:00
Saeed Maleki 62b3fd3ad7 bug fix and removed xml folder 2022-05-20 22:35:47 +00:00
Saeed Maleki 61b82610de
Custom pointwise compute op (#20)
* custom op -- compiles

* complete -- buggy for count > 1

* bug fix with count > 1-- res_add is also working
2022-05-19 14:11:28 -07:00
Saeed Maleki 332f35bfbf micro optimization for dependence checks 2022-05-17 23:19:40 +00:00
Saeed Maleki be9534e65a fix long name problem in an algo.xml 2022-05-13 21:41:48 +00:00
Saeed Maleki daf7711d08 LIBSRCFILES now has custom collective 2022-05-11 20:48:17 +00:00
Saeed Maleki 071b6129e7 SCCL_REDUCE is no longer needed in the long instruction list 2022-05-06 21:29:26 +00:00
Saeed Maleki 77bf9ce046 fixing a bug in custom collective API 2022-05-06 18:27:59 +00:00
Saeed Maleki 3447f0dd1b
removing the concept of nactives as we only allow for one MSCCL algo in the entire program. 2022-05-06 11:20:06 -07:00
Saeed Maleki 052347079a bug fix: in case async mode is used, only one MSCCL kernel can be queued at a time 2022-05-03 00:57:00 +00:00
Saeed Maleki 0969abc018 bug fix for nchannels > 1 2022-04-27 22:55:57 +00:00
Saeed Maleki 8cc0484312 adding the new optimized xml for small allreduce 2022-04-11 22:38:41 +00:00
Saeed Maleki dc74746f25 dead code elimination 2022-04-11 22:29:20 +00:00
Saeed Maleki e74fa51863
Lowlatency (#18)
* all dependences are pushed into one step

* reverting back the reduce operator

* error in nchunksperloop was fixed

* bug fix for allgather

* bug fix for merged dependences

* no load_coll is required with SCCL

* removed alltoall local cudaMemcpy

* fixing the freezing problem with p2p

* bug fix for nchannels in scclAlgo

* compiling with sm_35

* supports async for 1 operation

* better messaging

* bug fix with multiple xml, xml files for low latency allreduces

* fixed (#10)

* Adjusting boundaries for allreduces on A100 (#11)

* fixed

* set boundaries

* fixed scratch chunks number for ar_ll

* more details with DEBUG_SUBSYS

* set the boundaries for ar_ll128 with 20 threadblocks

* increasing the limit for ar_ll128

* Lowlatency merging (#17)

* update to dependence checker

* merged main into lowlatency -- compiles

* added a check for incorrect sequence of dependence check
2022-04-11 14:27:23 -07:00
Meghan Cowan cec1416cb0 Adding a build script 2022-03-15 10:45:23 -07:00
Yifan Xiong 10d847e3f9
Add 2D Hierarchical AlltoAll Algorithm (#15)
* 2D Hierarchical AlltoAll Algorithm

2D hierarchical AllToAll algorithm is implemented manually with strided copy kernels and p2p sends and receives. To use it, the XML needs to follow this example:

`<algo name="2D" nchunksperloop="32" nchannels="1" proto="Simple" ngpus="32" inplace="0" coll="alltoall"></algo>`

As usual, set `SCCL_XML_FILES=<path_to_empty_xml_algo>` to use this implementation of 2D hierarchical AllToAll algorithm.
2022-03-07 21:33:53 -08:00
Madan Musuvathi 58bae9b2ef updated copyright and license 2022-02-23 12:51:29 -08:00
Ziyue Yang 3851115cc0
revise readme (#14)
Co-authored-by: Ziyue Yang <Ziyue Yang>
2022-02-16 12:58:25 -08:00
Saeed Maleki ecc5b19623 increasing the max #threadblocks. 2022-02-15 02:54:48 +00:00
Saeed Maleki 0f5f34bc85 NET shared buffer env is no longer needed to be set for sccl 2022-02-10 23:24:43 +00:00
Saeed Maleki 667952a6df adding the checks for alltoall with SCCL_CONFIG 2022-02-10 22:17:05 +00:00