Remove dnf update from docker build scripts (#17551)

### Description
1. Remove `dnf update` from the docker build scripts, because it upgrades the
TensorRT (TRT) packages from CUDA 11.x builds to CUDA 12.x builds.
To reproduce the issue, run the following commands in a CentOS-based CUDA
11.x docker image such as nvidia/cuda:11.8.0-cudnn8-devel-ubi8:
```
export v=8.6.1.6-1.cuda11.8
dnf install -y libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} libnvinfer-vc-plugin8-${v} libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v} libnvinfer-vc-plugin-devel-${v} libnvinfer-headers-devel-${v} libnvinfer-headers-plugin-devel-${v}
dnf update -y
```
The last command generates output like the following:
```
========================================================================================================================
 Package                                     Architecture       Version                          Repository        Size
========================================================================================================================
Upgrading:
 libnvinfer-devel                            x86_64             8.6.1.6-1.cuda12.0               cuda             542 M
 libnvinfer-headers-devel                    x86_64             8.6.1.6-1.cuda12.0               cuda             118 k
 libnvinfer-headers-plugin-devel             x86_64             8.6.1.6-1.cuda12.0               cuda              14 k
 libnvinfer-plugin-devel                     x86_64             8.6.1.6-1.cuda12.0               cuda              13 M
 libnvinfer-plugin8                          x86_64             8.6.1.6-1.cuda12.0               cuda              13 M
 libnvinfer-vc-plugin-devel                  x86_64             8.6.1.6-1.cuda12.0               cuda             107 k
 libnvinfer-vc-plugin8                       x86_64             8.6.1.6-1.cuda12.0               cuda             251 k
 libnvinfer8                                 x86_64             8.6.1.6-1.cuda12.0               cuda             543 M
 libnvonnxparsers-devel                      x86_64             8.6.1.6-1.cuda12.0               cuda             467 k
 libnvonnxparsers8                           x86_64             8.6.1.6-1.cuda12.0               cuda             757 k
 libnvparsers-devel                          x86_64             8.6.1.6-1.cuda12.0               cuda             2.0 M
 libnvparsers8                               x86_64             8.6.1.6-1.cuda12.0               cuda             854 k
Installing dependencies:
 cuda-toolkit-12-0-config-common             noarch             12.0.146-1                       cuda             7.7 k
 cuda-toolkit-12-config-common               noarch             12.2.140-1                       cuda             7.9 k
 libcublas-12-0                              x86_64             12.0.2.224-1                     cuda             361 M
 libcublas-devel-12-0                        x86_64             12.0.2.224-1                     cuda             397 M

Transaction Summary
========================================================================================================================

```
As the output shows, `dnf update` wants to upgrade all of these to CUDA 12 packages.

The problem could also be solved by locking the package versions with the
`dnf versionlock` command right after installing the CUDA/TRT packages.
However, going forward, for better reproducibility I suggest manually
pinning dnf package versions in the installation scripts, as we already do
for TRT:

```bash
v="8.6.1.6-1.cuda11.8" &&\
    yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo &&\
    yum -y install libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} libnvinfer-vc-plugin8-${v}\
        libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v} libnvinfer-vc-plugin-devel-${v} libnvinfer-headers-devel-${v}  libnvinfer-headers-plugin-devel-${v}
```
When we need to upgrade a package because of a security alert or for some
other reason, we manually change the version string instead of relying on
`dnf update`. Though this approach takes more effort, it makes our
pipelines more stable.
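The pinning approach above can be sketched more generally as follows. This is a hypothetical illustration only, not the script used in this PR; the package list is abbreviated, and the `dnf` commands are shown as comments since they require the NVIDIA repo and root access.

```shell
# Sketch of the pinning approach (assumed, abbreviated package list):
# construct fully versioned package names once, so neither the install
# nor a later `dnf update` can silently pull in CUDA 12 builds.
v="8.6.1.6-1.cuda11.8"
pkgs=""
for p in libnvinfer8 libnvparsers8 libnvonnxparsers8 libnvinfer-plugin8; do
  pkgs="$pkgs ${p}-${v}"
done
echo "would run: dnf install -y$pkgs"

# The versionlock alternative would instead freeze whatever got installed:
#   dnf install -y 'dnf-command(versionlock)'
#   dnf versionlock add 'libnv*'
```

Either way, an unqualified `dnf update` can no longer swap the TRT packages to a different CUDA major version.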

2. Move the Python tests into docker.
### Motivation and Context
Right now the nightly GPU package mixes CUDA 11.x and CUDA 12.x libraries,
and the resulting package is completely unusable (it crashes every time).
This commit is contained in:
Changming Sun 2023-09-21 07:33:29 -07:00 committed by GitHub
Parent 038c76378f
Commit 57dfd15d7b
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
32 changed files with 351 additions and 244 deletions

@@ -200,8 +200,11 @@ stages:
- stage: arm64_test
dependsOn: ['arm64_build']
jobs:
- template: templates/py-packaging-linux-test.yml
- template: templates/py-packaging-linux-test-cpu.yml
parameters:
arch: 'aarch64'
machine_pool: 'onnxruntime-linux-ARM64-CPU-2019'
device: 'CPU'
base_image: 'arm64v8/almalinux:8'
devtoolset_rootpath: /opt/rh/gcc-toolset-12/root
ld_library_path_arg: /opt/rh/gcc-toolset-12/root/usr/lib64:/opt/rh/gcc-toolset-12/root/usr/lib:/opt/rh/gcc-toolset-12/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-12/root/usr/lib/dyninst:/usr/local/lib64
prepend_path: '/opt/rh/gcc-toolset-12/root/usr/bin:'

@@ -3,24 +3,38 @@ resources:
- pipeline: build
source: 'Python packaging pipeline'
trigger: true
branch: main # branch to pick the artifact, Used only for manual triggered pipeline runs for testing the pipeline itself
#TODO: Remove the following dependency. Running python tests should not need to use manylinux.
repositories:
- repository: manylinux # The name used to reference this repository in the checkout step
type: Github
endpoint: Microsoft
name: pypa/manylinux
ref: 5eda9aded5462201e6310105728d33016e637ea7
stages:
- stage: Linux_Test_CPU_x86_64_stage
jobs:
- template: templates/py-packaging-linux-test.yml
- template: templates/py-packaging-linux-test-cpu.yml
parameters:
arch: 'x86_64'
machine_pool: 'onnxruntime-Ubuntu2004-AMD-CPU'
device: 'CPU'
base_image: 'registry.access.redhat.com/ubi8/ubi'
devtoolset_rootpath: /opt/rh/gcc-toolset-12/root
ld_library_path_arg: /opt/rh/gcc-toolset-12/root/usr/lib64:/opt/rh/gcc-toolset-12/root/usr/lib:/opt/rh/gcc-toolset-12/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-12/root/usr/lib/dyninst:/usr/local/lib64
prepend_path: '/opt/rh/gcc-toolset-12/root/usr/bin:'
- stage: Linux_Test_CPU_aarch64_stage
dependsOn: []
jobs:
- template: templates/py-packaging-linux-test.yml
- template: templates/py-packaging-linux-test-cpu.yml
parameters:
arch: 'aarch64'
machine_pool: 'aiinfra-linux-ARM64-CPU-2019'
device: 'CPU'
base_image: 'arm64v8/almalinux:8'
devtoolset_rootpath: /opt/rh/gcc-toolset-12/root
ld_library_path_arg: /opt/rh/gcc-toolset-12/root/usr/lib64:/opt/rh/gcc-toolset-12/root/usr/lib:/opt/rh/gcc-toolset-12/root/usr/lib64/dyninst:/opt/rh/gcc-toolset-12/root/usr/lib/dyninst:/usr/local/lib64
prepend_path: '/opt/rh/gcc-toolset-12/root/usr/bin:'
- stage: Packages_Somking_Test
dependsOn: []
@@ -31,19 +31,6 @@ stages:
machine_pool:
vmImage: 'macOS-13'
itemPattern: '*/*mac*x86_64.whl'
- template: templates/py-package-smoking-test.yml
parameters:
job_name: Test_WIN_64_Wheels
itemPattern: '*/*win_amd64.whl'
machine_pool:
vmImage: 'windows-2022'
- template: templates/py-package-smoking-test.yml
parameters:
job_name: Test_WIN_32_Wheels
itemPattern: '*/*win32.whl'
python_arch: 'x86'
machine_pool:
vmImage: 'windows-2022'
- template: templates/py-package-smoking-test.yml
parameters:
job_name: Test_LINUX_x86_64_Wheels
@@ -61,7 +62,7 @@ stages:
- Linux_Test_CPU_aarch64_stage
- Packages_Somking_Test
jobs:
- template: templates/py-packaging-linux-test.yml
- template: templates/py-packaging-linux-test-cuda.yml
parameters:
arch: 'x86_64'
machine_pool: 'Onnxruntime-Linux-GPU'

@@ -68,7 +68,7 @@ jobs:
script: |
mkdir -p $HOME/.onnx
docker run --rm -e CFLAGS="${{parameters.OnnxruntimeCFlags}}" -e CXXFLAGS="${{parameters.OnnxruntimeCXXFlags}}" --volume /data/onnx:/data/onnx:ro --volume $(Build.SourcesDirectory):/onnxruntime_src --volume $(Build.BinariesDirectory):/build \
--volume $HOME/.onnx:/home/onnxruntimedev/.onnx -e NIGHTLY_BUILD onnxruntimecpubuildcentos8${{parameters.OnnxruntimeArch}} /bin/bash -c "python3 \
--volume $HOME/.onnx:/home/onnxruntimedev/.onnx -e NIGHTLY_BUILD onnxruntimecpubuildcentos8${{parameters.OnnxruntimeArch}} /bin/bash -c "python3.9 \
/onnxruntime_src/tools/ci_build/build.py --build_java --build_nodejs --build_dir /build --config Release \
--skip_submodule_sync --parallel --build_shared_lib ${{ parameters.AdditionalBuildFlags }} && cd /build/Release && make install DESTDIR=/build/linux-${{parameters.OnnxruntimeArch}}"
workingDirectory: $(Build.SourcesDirectory)

@@ -39,36 +39,22 @@ jobs:
versionSpec: $(PythonVersion)
architecture: ${{ parameters.python_arch }}
- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact'
inputs:
artifactName: 'onnxruntime'
targetPath: '$(Build.BinariesDirectory)/whl'
itemPattern: ${{parameters.itemPattern}}
# The public ADO project
${{ if eq(variables['System.CollectionId'], 'f3ad12f2-e480-4533-baf2-635c95467d29') }}:
buildType: current
# The private ADO project
${{ if eq(variables['System.CollectionId'], 'bc038106-a83b-4dab-9dd3-5a41bc58f34c') }}:
project: '530acbc4-21bc-487d-8cd8-348ff451d2ff'
definition: 841
preferTriggeringPipeline: true
runVersion: 'latest'
buildType: specific
- download: build # pipeline resource identifier.
artifact: 'onnxruntime'
- task: Bash@3
inputs:
targetType: 'inline'
script: |
set -ex
files=(whl/*.whl)
files=(*.whl)
FILE_NAME="${files[0]}"
FILE_NAME=$(basename $FILE_NAME)
PYTHON_PACKAGE_NAME=$(echo "$FILE_NAME" | cut -f 1 -d '-')
python3 -m pip install --find-links "$(Build.BinariesDirectory)/whl" $PYTHON_PACKAGE_NAME
pip show $PYTHON_PACKAGE_NAME
python -c "import onnxruntime as ort; print(ort.__version__)"
workingDirectory: $(Build.BinariesDirectory)
python3 -m pip install --find-links "$(Pipeline.Workspace)/build/onnxruntime" $PYTHON_PACKAGE_NAME
python3 -m pip show $PYTHON_PACKAGE_NAME
python3 -c "import onnxruntime as ort; print(ort.__version__)"
workingDirectory: $(Pipeline.Workspace)/build/onnxruntime
displayName: Test Package Installation
- task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3

@@ -0,0 +1,117 @@
parameters:
- name: arch
type: string
- name: base_image
type: string
- name: devtoolset_rootpath
type: string
- name: ld_library_path_arg
type: string
- name: prepend_path
type: string
- name: machine_pool
type: string
- name: extra_job_id
type: string
default: ''
- name: python_wheel_suffix
type: string
default: ''
# TODO: Ideally it should fetch information from the build that triggers it
- name: cmake_build_type
type: string
default: 'Release'
values:
- Debug
- Release
- RelWithDebInfo
- MinSizeRel
- name: timeout
type: number
default: 120
jobs:
- job: Linux_Test_CPU${{ parameters.extra_job_id }}_${{ parameters.arch }}
timeoutInMinutes: ${{ parameters.timeout }}
variables:
skipComponentGovernanceDetection: true
workspace:
clean: all
pool: ${{ parameters.machine_pool }}
steps:
- checkout: self
clean: true
submodules: none
# The public ADO project
- ${{ if eq(variables['System.CollectionId'], 'f3ad12f2-e480-4533-baf2-635c95467d29') }}:
- download: current # pipeline resource identifier.
artifact: 'drop-linux-cpu-${{ parameters.arch }}'
- download: current # pipeline resource identifier.
artifact: 'onnxruntime${{ parameters.python_wheel_suffix }}'
- bash: |
set -e -x
mv "$(Pipeline.Workspace)/drop-linux-cpu-${{ parameters.arch }}" $(Build.BinariesDirectory)/${{parameters.cmake_build_type}}
mv "$(Pipeline.Workspace)/onnxruntime${{ parameters.python_wheel_suffix }}" "$(Build.BinariesDirectory)/whl"
cp -r "$(Build.BinariesDirectory)/whl" $(Build.BinariesDirectory)/tmp
find "$(Build.BinariesDirectory)/tmp" -name '*.whl' -exec bash -c 'unzip -d "${1%.*}" "$1"' _ {} \;
# The private ADO project
- ${{ if eq(variables['System.CollectionId'], 'bc038106-a83b-4dab-9dd3-5a41bc58f34c') }}:
- download: build # pipeline resource identifier.
artifact: 'drop-linux-cpu-${{ parameters.arch }}'
- download: build # pipeline resource identifier.
artifact: 'onnxruntime${{ parameters.python_wheel_suffix }}'
- bash: |
set -e -x
ls $(Pipeline.Workspace)/build
mv "$(Pipeline.Workspace)/build/drop-linux-cpu-${{ parameters.arch }}" $(Build.BinariesDirectory)/${{parameters.cmake_build_type}}
mv "$(Pipeline.Workspace)/build/onnxruntime${{ parameters.python_wheel_suffix }}" "$(Build.BinariesDirectory)/whl"
cp -r "$(Build.BinariesDirectory)/whl" $(Build.BinariesDirectory)/tmp
find "$(Build.BinariesDirectory)/tmp" -name '*.whl' -exec bash -c 'unzip -d "${1%.*}" "$1"' _ {} \;
# The BinSkim task uses a dotnet program which doesn't support ARM CPUs yet
- ${{ if eq(parameters.arch, 'x86_64') }}:
- task: BinSkim@4
displayName: 'Run BinSkim'
inputs:
AnalyzeTargetGlob: '$(Build.BinariesDirectory)/tmp/**/*.so'
continueOnError: true
#- task: PostAnalysis@2
# inputs:
# GdnBreakAllTools: true
# GdnBreakPolicy: M365
# GdnBreakPolicyMinSev: Error
- template: get-docker-image-steps.yml
parameters:
Dockerfile: tools/ci_build/github/linux/docker/inference/x64/python/cpu/Dockerfile.manylinux2_28_cpu
Context: tools/ci_build/github/linux/docker/inference/x64/python/cpu
DockerBuildArgs: "--build-arg POLICY=manylinux_2_28 --build-arg BUILD_UID=$( id -u ) --build-arg BASEIMAGE=${{ parameters.base_image }} --build-arg PLATFORM=${{ parameters.arch }} --build-arg PREPEND_PATH=${{ parameters.prepend_path }} --build-arg LD_LIBRARY_PATH_ARG=${{ parameters.ld_library_path_arg }} --build-arg DEVTOOLSET_ROOTPATH=${{ parameters.devtoolset_rootpath }}"
Repository: onnxruntimecpubuildpython${{ parameters.arch }}
${{ if eq(parameters.arch, 'aarch64') }}:
UpdateDepsTxt: false
- task: Bash@3
displayName: 'Bash Script'
inputs:
targetType: filePath
filePath: tools/ci_build/github/linux/run_python_dockertest.sh
arguments: -d CPU -c ${{parameters.cmake_build_type}} -i onnxruntimecpubuildpython${{ parameters.arch }}
- task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3
displayName: 'Clean Agent Directories'
condition: always()

@@ -0,0 +1,98 @@
parameters:
- name: arch
type: string
- name: device
type: string
values:
- CPU
- GPU
- name: machine_pool
type: string
- name: extra_job_id
type: string
default: ''
- name: python_wheel_suffix
type: string
default: ''
# TODO: Ideally it should fetch information from the build that triggers it
- name: cmake_build_type
type: string
default: 'Release'
values:
- Debug
- Release
- RelWithDebInfo
- MinSizeRel
- name: timeout
type: number
default: 120
jobs:
- job: Linux_Test_GPU${{ parameters.extra_job_id }}_${{ parameters.arch }}
timeoutInMinutes: ${{ parameters.timeout }}
variables:
skipComponentGovernanceDetection: true
workspace:
clean: all
pool: ${{ parameters.machine_pool }}
steps:
- checkout: self
clean: true
submodules: none
# The public ADO project
# - ${{ if eq(variables['System.CollectionId'], 'f3ad12f2-e480-4533-baf2-635c95467d29') }}:
# The private ADO project
- ${{ if eq(variables['System.CollectionId'], 'bc038106-a83b-4dab-9dd3-5a41bc58f34c') }}:
- download: build # pipeline resource identifier.
artifact: 'drop-linux-gpu-${{ parameters.arch }}'
- download: build # pipeline resource identifier.
artifact: 'onnxruntime${{ parameters.python_wheel_suffix }}'
- bash: |
set -e -x
ls $(Pipeline.Workspace)/build
mv "$(Pipeline.Workspace)/build/drop-linux-gpu-${{ parameters.arch }}" $(Build.BinariesDirectory)/${{parameters.cmake_build_type}}
mv "$(Pipeline.Workspace)/build/onnxruntime${{ parameters.python_wheel_suffix }}" "$(Build.BinariesDirectory)/whl"
cp -r "$(Build.BinariesDirectory)/whl" $(Build.BinariesDirectory)/tmp
find "$(Build.BinariesDirectory)/tmp" -name '*.whl' -exec bash -c 'unzip -d "${1%.*}" "$1"' _ {} \;
# The BinSkim task uses a dotnet program which doesn't support ARM CPUs yet
- ${{ if eq(parameters.arch, 'x86_64') }}:
- task: BinSkim@4
displayName: 'Run BinSkim'
inputs:
AnalyzeTargetGlob: '$(Build.BinariesDirectory)/tmp/**/*.so'
continueOnError: true
#- task: PostAnalysis@2
# inputs:
# GdnBreakAllTools: true
# GdnBreakPolicy: M365
# GdnBreakPolicyMinSev: Error
- template: get-docker-image-steps.yml
parameters:
Dockerfile: tools/ci_build/github/linux/docker/Dockerfile.manylinux2_28_cuda11_8_tensorrt8_6
Context: tools/ci_build/github/linux/docker
DockerBuildArgs: "--network=host --build-arg POLICY=manylinux_2_28 --build-arg PLATFORM=x86_64 --build-arg PREPEND_PATH=/usr/local/cuda/bin --build-arg LD_LIBRARY_PATH_ARG=/usr/local/lib64 --build-arg DEVTOOLSET_ROOTPATH=/usr --build-arg BUILD_UID=$( id -u ) --build-arg PLATFORM=${{ parameters.arch }}"
Repository: onnxruntimecuda118xtrt86build${{ parameters.arch }}
- task: Bash@3
displayName: 'Bash Script'
inputs:
targetType: filePath
filePath: tools/ci_build/github/linux/run_python_dockertest.sh
arguments: -d GPU -c ${{parameters.cmake_build_type}} -i onnxruntimecuda118xtrt86build${{ parameters.arch }}
- task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3
displayName: 'Clean Agent Directories'
condition: always()

@@ -1,85 +0,0 @@
parameters:
- name: arch
type: string
- name: device
type: string
- name: machine_pool
type: string
- name: extra_job_id
type: string
default: ''
- name: python_wheel_suffix
type: string
default: ''
# TODO: Ideally it should fetch information from the build that triggers it
- name: cmake_build_type
type: string
default: 'Release'
values:
- Debug
- Release
- RelWithDebInfo
- MinSizeRel
- name: timeout
type: number
default: 120
jobs:
- job: Linux_Test_${{ parameters.device }}${{ parameters.extra_job_id }}_${{ parameters.arch }}
timeoutInMinutes: ${{ parameters.timeout }}
variables:
skipComponentGovernanceDetection: true
workspace:
clean: all
pool: ${{ parameters.machine_pool }}
steps:
- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact'
inputs:
artifactName: 'drop-linux-${{ lower(parameters.device) }}-${{ parameters.arch }}'
targetPath: '$(Build.BinariesDirectory)/${{parameters.cmake_build_type}}'
# The public ADO project
${{ if eq(variables['System.CollectionId'], 'f3ad12f2-e480-4533-baf2-635c95467d29') }}:
buildType: current
# The private ADO project
${{ if eq(variables['System.CollectionId'], 'bc038106-a83b-4dab-9dd3-5a41bc58f34c') }}:
project: '530acbc4-21bc-487d-8cd8-348ff451d2ff'
definition: 841
preferTriggeringPipeline: true
runVersion: 'latest'
buildType: specific
- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Artifact'
inputs:
artifactName: 'onnxruntime${{ parameters.python_wheel_suffix }}'
targetPath: '$(Build.BinariesDirectory)/whl'
# The public ADO project
${{ if eq(variables['System.CollectionId'], 'f3ad12f2-e480-4533-baf2-635c95467d29') }}:
buildType: current
# The private ADO project
${{ if eq(variables['System.CollectionId'], 'bc038106-a83b-4dab-9dd3-5a41bc58f34c') }}:
project: '530acbc4-21bc-487d-8cd8-348ff451d2ff'
definition: 841
preferTriggeringPipeline: true
runVersion: 'latest'
buildType: specific
- task: Bash@3
displayName: 'Bash Script'
inputs:
targetType: filePath
filePath: tools/ci_build/github/linux/run_python_tests.sh
arguments: -d ${{ parameters.device }} -c ${{parameters.cmake_build_type}}
- task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3
displayName: 'Clean Agent Directories'
condition: always()

@@ -1,9 +1,9 @@
ARG BASEIMAGE=amd64/almalinux:8
ARG BASEIMAGE=registry.access.redhat.com/ubi8/ubi
ARG POLICY=manylinux_2_28
ARG PLATFORM=x86_64
ARG DEVTOOLSET_ROOTPATH=/opt/rh/gcc-toolset-12/root
ARG LD_LIBRARY_PATH_ARG=${DEVTOOLSET_ROOTPATH}/usr/lib64:${DEVTOOLSET_ROOTPATH}/usr/lib:${DEVTOOLSET_ROOTPATH}/usr/lib64/dyninst:${DEVTOOLSET_ROOTPATH}/usr/lib/dyninst:/usr/local/lib64
ARG PREPEND_PATH=${DEVTOOLSET_ROOTPATH}/usr/bin:
ARG PREPEND_PATH=/usr/lib/jvm/msopenjdk-11/bin:${DEVTOOLSET_ROOTPATH}/usr/bin:
#Build manylinux2014 docker image begin
FROM $BASEIMAGE AS runtime_base
@@ -26,7 +26,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -35,7 +34,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -137,9 +135,7 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.8.txt \
build_scripts/requirements3.9.txt \
build_scripts/requirements3.10.txt \
@@ -156,6 +152,7 @@ CMD ["/bin/bash"]
#Build manylinux2014 docker image end
ENV PATH ${DEVTOOLSET_ROOTPATH}/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/manylinux/install_centos.sh

@@ -31,7 +31,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -40,7 +39,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -140,7 +138,6 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.8.txt \
build_scripts/requirements3.9.txt \
@@ -156,7 +153,7 @@ ENV SSL_CERT_FILE=/opt/_internal/certs.pem
CMD ["/bin/bash"]
#Build manylinux2014 docker image end
ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
#Add our own dependencies
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/manylinux/install_centos.sh && /tmp/scripts/manylinux/install_deps.sh && rm -rf /tmp/scripts

@@ -31,7 +31,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -40,7 +39,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -140,7 +138,6 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.8.txt \
build_scripts/requirements3.9.txt \
@@ -163,7 +160,7 @@ RUN v="8.4.1-1.cuda11.6" &&\
yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo &&\
yum -y install libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} \
libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v}
ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
#Add our own dependencies
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/manylinux/install_centos.sh && /tmp/scripts/manylinux/install_deps.sh && rm -rf /tmp/scripts

@@ -31,7 +31,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -40,7 +39,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -140,7 +138,6 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.8.txt \
build_scripts/requirements3.9.txt \
@@ -163,7 +160,7 @@ RUN v="8.5.1-1.cuda11.8" &&\
yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo &&\
yum -y install libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} \
libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v}
ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
#Add our own dependencies
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/manylinux/install_centos.sh && /tmp/scripts/manylinux/install_deps.sh && rm -rf /tmp/scripts

@@ -31,7 +31,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -40,7 +39,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -147,7 +145,6 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.7.txt \
build_scripts/requirements3.8.txt \
@@ -171,7 +168,7 @@ RUN v="8.6.1.6-1.cuda11.8" &&\
yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo &&\
yum -y install libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} libnvinfer-vc-plugin8-${v}\
libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v} libnvinfer-vc-plugin-devel-${v} libnvinfer-headers-devel-${v} libnvinfer-headers-plugin-devel-${v}
ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
#Add our own dependencies
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/manylinux/install_centos.sh && /tmp/scripts/manylinux/install_deps.sh && rm -rf /tmp/scripts

@@ -31,7 +31,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -40,7 +39,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -140,7 +138,6 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.8.txt \
build_scripts/requirements3.9.txt \

@@ -7,40 +7,30 @@
# Build base image with required system packages
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04 AS base
# The local directory into which to build and install CMAKE
ARG ONNXRUNTIME_LOCAL_CODE_DIR=/code
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/src/tensorrt/bin:${ONNXRUNTIME_LOCAL_CODE_DIR}/cmake-3.27.3-linux-x86_64/bin:/opt/miniconda/bin:${PATH}
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/src/tensorrt/bin:${PATH}
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update &&\
apt-get install -y sudo git bash unattended-upgrades wget
RUN unattended-upgrade
apt-get install -y git bash wget
# Install python3
RUN apt-get install -y --no-install-recommends \
python3 \
python3-pip \
python3-dev \
python3-wheel &&\
cd /usr/local/bin &&\
ln -s /usr/bin/python3 python &&\
ln -s /usr/bin/pip3 pip;
python3-wheel
RUN pip install --upgrade pip
RUN pip install setuptools>=41.0.0
# Install TensorRT
RUN v="8.6.1.6-1+cuda11.8" &&\
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub &&\
apt-get update &&\
sudo apt-get install -y libnvinfer8=${v} libnvonnxparsers8=${v} libnvparsers8=${v} libnvinfer-plugin8=${v} libnvinfer-lean8=${v} libnvinfer-vc-plugin8=${v} libnvinfer-dispatch8=${v}\
apt-get install -y libnvinfer8=${v} libnvonnxparsers8=${v} libnvparsers8=${v} libnvinfer-plugin8=${v} libnvinfer-lean8=${v} libnvinfer-vc-plugin8=${v} libnvinfer-dispatch8=${v}\
libnvinfer-headers-dev=${v} libnvinfer-headers-plugin-dev=${v} libnvinfer-dev=${v} libnvonnxparsers-dev=${v} libnvparsers-dev=${v} libnvinfer-plugin-dev=${v} libnvinfer-lean-dev=${v} libnvinfer-vc-plugin-dev=${v} libnvinfer-dispatch-dev=${v}\
python3-libnvinfer=${v} libnvinfer-samples=${v} tensorrt-dev=${v} tensorrt-libs=${v}
# Install Valgrind
RUN apt-get install -y valgrind
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/install_dotnet.sh && rm -rf /tmp/scripts

@@ -1,9 +1,8 @@
#!/bin/bash
set -e -x
os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
echo "installing for CentOS version : $os_major_version"
dnf install -y glibc-langpack-\* glibc-locale-source which gdb redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel java-11-openjdk-devel graphviz gcc-toolset-12-binutils gcc-toolset-12-gcc gcc-toolset-12-gcc-c++ gcc-toolset-12-gcc-gfortran
locale
dnf install -y python39-devel glibc-langpack-\* glibc-locale-source which redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel java-11-openjdk-devel graphviz gcc-toolset-12-binutils gcc-toolset-12-gcc gcc-toolset-12-gcc-c++ gcc-toolset-12-gcc-gfortran
locale

@@ -14,20 +14,20 @@ function GetFile {
echo "File '$path' already exists. Skipping download"
return 0
else
rm -rf $path
rm -rf "$path"
fi
fi
if [[ -f $uri ]]; then
echo "'$uri' is a file path, copying file to '$path'"
cp $uri $path
cp "$uri" "$path"
return $?
fi
echo "Downloading $uri"
# Use aria2c if available, otherwise use curl
if command -v aria2c > /dev/null; then
-aria2c -q -d $(dirname $path) -o $(basename $path) "$uri"
+aria2c -q -d "$(dirname $path)" -o "$(basename $path)" "$uri"
else
curl "$uri" -sSL --retry $download_retries --retry-delay $retry_wait_time_seconds --create-dirs -o "$path" --fail
fi
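The quoting this hunk adds matters as soon as a path contains whitespace; a quick sketch (throwaway paths, not from the PR):

```shell
#!/bin/bash
set -e
# A path with a space: unquoted expansion would split it into two words
path="/tmp/gr_demo dir/file.txt"
mkdir -p "$(dirname "$path")"
printf 'hello\n' > "$path"
cp "$path" "$path.bak"          # quoted, so the space survives intact
[ -f "$path.bak" ] && echo copied   # → copied
```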
@@ -38,9 +38,10 @@ mkdir -p /tmp/src
cd /tmp/src
+CPU_ARCH=$(uname -m)
echo "Installing cmake"
-GetFile https://github.com/Kitware/CMake/releases/download/v3.27.3/cmake-3.27.3-linux-`uname -m`.tar.gz /tmp/src/cmake-3.27.3-linux-`uname -m`.tar.gz
-tar -zxf /tmp/src/cmake-3.27.3-linux-`uname -m`.tar.gz --strip=1 -C /usr
+GetFile "https://github.com/Kitware/CMake/releases/download/v3.27.3/cmake-3.27.3-linux-$CPU_ARCH.tar.gz" "/tmp/src/cmake.tar.gz"
+tar -zxf /tmp/src/cmake.tar.gz --strip=1 -C /usr
echo "Installing Ninja"
GetFile https://github.com/ninja-build/ninja/archive/v1.10.0.tar.gz /tmp/src/ninja-linux.tar.gz
@@ -52,7 +53,7 @@ mv ./build-cmake/ninja /usr/bin
popd
echo "Installing Node.js"
-CPU_ARCH=`uname -m`
if [[ "$CPU_ARCH" = "x86_64" ]]; then
NODEJS_ARCH=x64
elif [[ "$CPU_ARCH" = "aarch64" ]]; then
@@ -64,16 +65,5 @@ fi
GetFile https://nodejs.org/dist/v18.17.1/node-v18.17.1-linux-${NODEJS_ARCH}.tar.gz /tmp/src/node-v18.17.1-linux-${NODEJS_ARCH}.tar.gz
tar --strip 1 -xf /tmp/src/node-v18.17.1-linux-${NODEJS_ARCH}.tar.gz -C /usr
-# The Python version in CentOS 7's python3 package is no longer supported (3.6) so we will build Python from source.
-echo "Installing Python"
-PYTHON_VERSION="3.8.17"
-GetFile https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz /tmp/src/Python-${PYTHON_VERSION}.tgz
-tar -zxf Python-${PYTHON_VERSION}.tgz
-pushd Python-${PYTHON_VERSION}
-./configure
-make
-make install
-popd
cd /
rm -rf /tmp/src
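The `uname -m` → Node.js arch mapping in the hunk above can be factored into a function and exercised on fixed inputs (a sketch; `arm64` for `aarch64` follows Node.js's tarball naming, which is what the script's mapping targets):

```shell
#!/bin/bash
set -e
# Map a kernel machine name to the architecture label Node.js tarballs use
map_node_arch() {
  case "$1" in
    x86_64)  echo x64 ;;
    aarch64) echo arm64 ;;
    *)       echo "$1" ;;   # fall through unchanged for other arches
  esac
}
map_node_arch x86_64    # → x64
map_node_arch aarch64   # → arm64
```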

@@ -5,10 +5,10 @@
ARG BASEIMAGE=amd64/almalinux:8
FROM $BASEIMAGE
-ENV PATH /opt/rh/gcc-toolset-12/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ENV PATH /usr/lib/jvm/msopenjdk-11/bin:/opt/rh/gcc-toolset-12/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
+ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/install_centos.sh && /tmp/scripts/install_deps.sh && rm -rf /tmp/scripts

@@ -1,9 +1,9 @@
#!/bin/bash
set -e -x
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
echo "installing for CentOS version : $os_major_version"
-dnf install -y python39-devel glibc-langpack-\* glibc-locale-source which gdb redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel java-11-openjdk-devel graphviz gcc-toolset-12-binutils gcc-toolset-12-gcc gcc-toolset-12-gcc-c++ gcc-toolset-12-gcc-gfortran
-locale
+rpm -Uvh https://packages.microsoft.com/config/centos/$os_major_version/packages-microsoft-prod.rpm
+dnf install -y python39-devel glibc-langpack-\* glibc-locale-source which redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel msopenjdk-11 graphviz gcc-toolset-12-binutils gcc-toolset-12-gcc gcc-toolset-12-gcc-c++ gcc-toolset-12-gcc-gfortran
+locale

@@ -4,8 +4,10 @@
# This file is used by Zip-Nuget Packaging NoContribOps Pipeline,Zip-Nuget-Java Packaging Pipeline
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubi8
ENV PATH /usr/lib/jvm/msopenjdk-11/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
+ENV JAVA_HOME=/usr/lib/jvm/msopenjdk-11
ADD scripts /tmp/scripts
RUN cd /tmp/scripts && /tmp/scripts/install_centos.sh && /tmp/scripts/install_deps.sh && rm -rf /tmp/scripts

@@ -1,9 +1,9 @@
#!/bin/bash
set -e -x
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
echo "installing for CentOS version : $os_major_version"
-dnf install -y python39-devel python3-devel glibc-langpack-\* glibc-locale-source which gdb redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel java-11-openjdk-devel
-locale
+rpm -Uvh https://packages.microsoft.com/config/centos/$os_major_version/packages-microsoft-prod.rpm
+dnf install -y python39-devel glibc-langpack-\* glibc-locale-source which redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel msopenjdk-11
+locale

@@ -26,7 +26,6 @@ COPY build_scripts/fixup-mirrors.sh /usr/local/sbin/fixup-mirrors
# setup entrypoint, this will wrap commands with `linux32` with i686 images
COPY build_scripts/install-entrypoint.sh \
-build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
@@ -35,7 +34,6 @@ COPY manylinux-entrypoint /usr/local/bin/manylinux-entrypoint
ENTRYPOINT ["manylinux-entrypoint"]
COPY build_scripts/install-runtime-packages.sh \
-build_scripts/update-system-packages.sh \
build_scripts/build_utils.sh \
/build_scripts/
RUN manylinux-entrypoint /build_scripts/install-runtime-packages.sh && rm -rf /build_scripts/
@@ -132,7 +130,6 @@ COPY --from=build_git /manylinux-rootfs /
COPY --from=build_cpython /manylinux-rootfs /
COPY --from=all_python /opt/_internal /opt/_internal/
COPY build_scripts/finalize.sh \
-build_scripts/update-system-packages.sh \
build_scripts/python-tag-abi-tag.py \
build_scripts/requirements3.8.txt \
build_scripts/requirements3.9.txt \

@@ -1,11 +1,11 @@
#!/bin/bash
set -e
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
echo "installing for os major version : $os_major_version"
dnf install -y glibc-langpack-\*
-yum install -y which gdb redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel perl-IPC-Cmd openssl-devel wget
+yum install -y which redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel perl-IPC-Cmd openssl-devel wget
# export PATH=/opt/python/cp38-cp38/bin:$PATH
@@ -17,4 +17,4 @@ mkdir build
cd build
cmake ..
cmake --install .
-cd ../..
+cd ../..

@@ -94,7 +94,7 @@ index 9ef1e99..ec52833 100755
+fi
\ No newline at end of file
diff --git a/install-runtime-packages.sh b/install-runtime-packages.sh
-index 137d2e2..4269afb 100755
+index 137d2e2..203b4bc 100755
--- a/install-runtime-packages.sh
+++ b/install-runtime-packages.sh
@@ -33,7 +33,7 @@ source $MY_DIR/build_utils.sh
@@ -130,7 +130,7 @@ index 137d2e2..4269afb 100755
elif [ "${AUDITWHEEL_ARCH}" == "aarch64" ] || [ "${AUDITWHEEL_ARCH}" == "ppc64le" ] || [ "${AUDITWHEEL_ARCH}" == "s390x" ]; then
# Software collection (for devtoolset-10)
yum -y install centos-release-scl-rh
@@ -86,19 +88,18 @@ if [ "${AUDITWHEEL_POLICY}" == "manylinux2014" ]; then
@@ -86,19 +88,21 @@ if [ "${AUDITWHEEL_POLICY}" == "manylinux2014" ]; then
fi
elif [ "${AUDITWHEEL_POLICY}" == "manylinux_2_28" ]; then
PACKAGE_MANAGER=dnf
@@ -148,6 +148,9 @@ index 137d2e2..4269afb 100755
- TOOLCHAIN_DEPS="gcc-toolset-12-binutils gcc-toolset-12-gcc gcc-toolset-12-gcc-c++ gcc-toolset-12-gcc-gfortran"
- if [ "${AUDITWHEEL_ARCH}" == "x86_64" ]; then
- TOOLCHAIN_DEPS="${TOOLCHAIN_DEPS} yasm"
+ if test -f "/etc/yum.repos.d/ubi.repo"; then
+ sed -i 's/enabled\s*=\s*1/enabled = 1\nexclude=dotnet* aspnet* netstandard*/g' /etc/yum.repos.d/ubi.repo
+ fi
+ if [[ -d /usr/local/cuda ]]; then
+ TOOLCHAIN_DEPS="gcc gcc-c++"
+ else
@@ -155,7 +158,7 @@ index 137d2e2..4269afb 100755
fi
elif [ "${AUDITWHEEL_POLICY}" == "musllinux_1_1" ]; then
TOOLCHAIN_DEPS="binutils gcc g++ gfortran"
-@@ -121,12 +122,6 @@ else
+@@ -121,12 +125,6 @@ else
exit 1
fi
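The `sed` line the patch adds can be tried against a sample `ubi.repo` stanza (the sample file and its contents are illustrative; the real file is `/etc/yum.repos.d/ubi.repo`):

```shell
#!/bin/bash
set -e
cat > /tmp/ubi.repo.sample <<'EOF'
[ubi-8-baseos-rpms]
name = Red Hat Universal Base Image 8 (RPMs) - BaseOS
enabled = 1
EOF
# Insert an exclude line after every "enabled = 1", as the patch does,
# so dotnet packages come only from the Microsoft repo
sed -i 's/enabled\s*=\s*1/enabled = 1\nexclude=dotnet* aspnet* netstandard*/g' /tmp/ubi.repo.sample
grep 'exclude=' /tmp/ubi.repo.sample   # → exclude=dotnet* aspnet* netstandard*
```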

@@ -2,13 +2,15 @@
set -e -x
if [ -f /etc/redhat-release ]; then
-dnf update --refresh -y \
-&& dnf install -y dotnet-sdk-6.0
+# If you found the following command went successfully but dotnet command still reports no sdk was found, most likely
+# it was because the dotnet packages were installed from more than one dnf repos.
+dnf install -y dotnet-sdk-6.0 dotnet-runtime-6.0
elif [ -f /etc/os-release ]; then
# Get Ubuntu version
-declare repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '"'; fi)
+declare repo_version
+repo_version=$(if command -v lsb_release &> /dev/null; then lsb_release -r -s; else grep -oP '(?<=^VERSION_ID=).+' /etc/os-release | tr -d '"'; fi)
# Download Microsoft signing key and repository
-wget https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
+wget "https://packages.microsoft.com/config/ubuntu/$repo_version/packages-microsoft-prod.deb" -O packages-microsoft-prod.deb
# Install Microsoft signing key and repository
dpkg -i packages-microsoft-prod.deb
# Clean up
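The `VERSION_ID` fallback above can be exercised on a sample `os-release` file (a sketch; the real script reads `/etc/os-release` only when `lsb_release` is missing):

```shell
#!/bin/bash
set -e
# Minimal stand-in for /etc/os-release
printf 'NAME="Ubuntu"\nVERSION_ID="22.04"\n' > /tmp/os-release.sample
# Same extraction as the script: value after VERSION_ID=, quotes stripped
repo_version=$(grep -oP '(?<=^VERSION_ID=).+' /tmp/os-release.sample | tr -d '"')
echo "$repo_version"   # → 22.04
```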

@@ -1,17 +1,18 @@
#!/bin/bash
set -e
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
echo "installing for os major version : $os_major_version"
if [ "$os_major_version" -gt 7 ]; then
PACKAGE_MANAGER="dnf"
-$PACKAGE_MANAGER install -y which gdb redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel perl-IPC-Cmd openssl-devel wget
+$PACKAGE_MANAGER install -y which redhat-lsb-core expat-devel tar unzip zlib-devel make bzip2 bzip2-devel perl-IPC-Cmd openssl-devel wget
else
PACKAGE_MANAGER="yum"
-$PACKAGE_MANAGER install -y which gdb redhat-lsb-core expat-devel tar unzip zlib-devel make libunwind bzip2 bzip2-devel perl-IPC-Cmd openssl-devel wget
+$PACKAGE_MANAGER install -y which redhat-lsb-core expat-devel tar unzip zlib-devel make libunwind bzip2 bzip2-devel perl-IPC-Cmd openssl-devel wget
fi
+rpm -Uvh https://packages.microsoft.com/config/centos/$os_major_version/packages-microsoft-prod.rpm
# Install Java
# Install automatic documentation generation dependencies
-$PACKAGE_MANAGER install -y java-11-openjdk-devel graphviz
+$PACKAGE_MANAGER install -y msopenjdk-11 graphviz

@@ -3,18 +3,20 @@ set -e -x
# Development tools and libraries
if [ -f /etc/redhat-release ]; then
-yum update && yum -y install graphviz
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+dnf -y install graphviz
elif [ -f /etc/os-release ]; then
apt-get update && apt-get install -y graphviz
-os_major_version=$(cat /etc/os-release | tr -dc '0-9.'|cut -d \. -f1)
else
echo "Unsupported OS"
exit 1
fi
# Install dotnet
-source $(cd "$(dirname "${BASH_SOURCE[0]}")/.." &> /dev/null && pwd)/install_dotnet.sh
+LOCAL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)"
+PARENT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." &> /dev/null && pwd)"
+# ShellCheck is unable to follow dynamic paths, such as source "$somedir/file".
+# shellcheck disable=SC1091
+source "$PARENT_DIR/install_dotnet.sh"
if [ ! -d "/opt/conda/bin" ]; then
PYTHON_EXES=("/opt/python/cp38-cp38/bin/python3.8" "/opt/python/cp39-cp39/bin/python3.9" "/opt/python/cp310-cp310/bin/python3.10" "/opt/python/cp311-cp311/bin/python3.11")
@@ -22,23 +24,17 @@ else
PYTHON_EXES=("/opt/conda/bin/python")
fi
-SYS_LONG_BIT=$(getconf LONG_BIT)
mkdir -p /tmp/src
-GLIBC_VERSION=$(getconf GNU_LIBC_VERSION | cut -f 2 -d \.)
-if [[ $SYS_LONG_BIT = "64" ]]; then
-LIBDIR="lib64"
-else
-LIBDIR="lib"
-fi
-cd /tmp/src
-source $(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)/install_shared_deps.sh
+# shellcheck disable=SC1091
+source "$LOCAL_DIR/install_shared_deps.sh"
cd /tmp/src
if ! [ -x "$(command -v protoc)" ]; then
-source ${0/%install_deps.sh/..\/install_protobuf.sh}
+# shellcheck disable=SC1091
+source "$PARENT_DIR/install_protobuf.sh"
fi
export ONNX_ML=1
@@ -46,7 +42,7 @@ export CMAKE_ARGS="-DONNX_GEN_PB_TYPE_STUBS=OFF -DONNX_WERROR=OFF"
for PYTHON_EXE in "${PYTHON_EXES[@]}"
do
-${PYTHON_EXE} -m pip install -r ${0/%install_deps\.sh/requirements\.txt}
+${PYTHON_EXE} -m pip install -r "${0/%install_deps\.sh/requirements\.txt}"
done
cd /
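The `${0/%install_deps.sh/requirements.txt}` expansion used in the loop replaces the pattern only when it sits at the end of `$0`; demonstrated on a fixed path:

```shell
#!/bin/bash
set -e
script_path="/tmp/scripts/install_deps.sh"
# ${var/%pattern/replacement}: replace pattern only at the end of the value
req_path="${script_path/%install_deps.sh/requirements.txt}"
echo "$req_path"   # → /tmp/scripts/requirements.txt
```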

@@ -11,7 +11,7 @@ else
PYTHON_EXES=("/opt/conda/bin/python")
fi
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
SYS_LONG_BIT=$(getconf LONG_BIT)
mkdir -p /tmp/src

@@ -11,7 +11,7 @@ else
PYTHON_EXES=("/opt/conda/bin/python")
fi
-os_major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
+os_major_version=$(tr -dc '0-9.' < /etc/redhat-release |cut -d \. -f1)
SYS_LONG_BIT=$(getconf LONG_BIT)
mkdir -p /tmp/src

@@ -0,0 +1,29 @@
#!/bin/bash
set -e -x
BUILD_CONFIG="Release"
while getopts "i:d:x:c:" parameter_Option
do case "${parameter_Option}"
in
i) DOCKER_IMAGE=${OPTARG};;
d) DEVICE=${OPTARG};;
c) BUILD_CONFIG=${OPTARG};;
esac
done
if [ $DEVICE = "GPU" ]; then
ADDITIONAL_DOCKER_PARAMETER="--gpus all"
fi
mkdir -p $HOME/.onnx
docker run --rm \
--volume /data/onnx:/data/onnx:ro \
--volume $BUILD_SOURCESDIRECTORY:/onnxruntime_src \
--volume $BUILD_BINARIESDIRECTORY:/build \
--volume /data/models:/build/models:ro \
--volume $HOME/.onnx:/home/onnxruntimedev/.onnx \
-w /onnxruntime_src \
-e NIGHTLY_BUILD \
-e BUILD_BUILDNUMBER \
$ADDITIONAL_DOCKER_PARAMETER \
$DOCKER_IMAGE tools/ci_build/github/linux/run_python_tests.sh -d $DEVICE -c $BUILD_CONFIG
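The new script's `getopts` loop can be wrapped in a function (with quoting added) so the flag handling is checkable without Docker; the function is a sketch, not the script itself:

```shell
#!/bin/bash
set -e
parse_args() {
  local DEVICE="" BUILD_CONFIG="Release" OPTIND=1 parameter_Option
  while getopts "i:d:x:c:" parameter_Option "$@"; do
    case "${parameter_Option}" in
      d) DEVICE="${OPTARG}" ;;
      c) BUILD_CONFIG="${OPTARG}" ;;
      *) ;;                        # ignore flags we don't model here
    esac
  done
  echo "${DEVICE}:${BUILD_CONFIG}"
}
parse_args -d GPU             # → GPU:Release
parse_args -d CPU -c Debug    # → CPU:Debug
```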

@@ -15,7 +15,8 @@ c) BUILD_CONFIG=${OPTARG};;
esac
done
-cd $BUILD_BINARIESDIRECTORY
+export PATH=/opt/python/cp38-cp38/bin:$PATH
+cd /build
files=(whl/*.whl)
FILE_NAME="${files[0]}"
FILE_NAME=$(basename $FILE_NAME)
@@ -23,7 +24,7 @@ PYTHON_PACKAGE_NAME=$(echo "$FILE_NAME" | cut -f 1 -d '-')
echo "Package name:$PYTHON_PACKAGE_NAME"
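The package-name derivation above, run on a sample wheel filename (a hypothetical name; PEP 427 puts the distribution name in the first `-`-separated field):

```shell
#!/bin/bash
set -e
FILE_NAME="onnxruntime_gpu-1.16.0-cp38-cp38-linux_x86_64.whl"
# First '-'-separated field is the distribution name
PYTHON_PACKAGE_NAME=$(echo "$FILE_NAME" | cut -f 1 -d '-')
echo "$PYTHON_PACKAGE_NAME"   # → onnxruntime_gpu
```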
-BUILD_ARGS="--build_dir $BUILD_BINARIESDIRECTORY --config $BUILD_CONFIG --test --skip_submodule_sync --parallel --enable_lto --build_wheel "
+BUILD_ARGS="--build_dir /build --config $BUILD_CONFIG --test --skip_submodule_sync --parallel --enable_lto --build_wheel "
ARCH=$(uname -m)
@@ -35,20 +36,15 @@ if [ $BUILD_DEVICE == "GPU" ]; then
BUILD_ARGS="$BUILD_ARGS --use_cuda --use_tensorrt --cuda_version=11.8 --tensorrt_home=/usr --cuda_home=/usr/local/cuda-11.8 --cudnn_home=/usr/local/cuda-11.8"
fi
# We assume the machine doesn't have gcc and python development header files, so we don't build onnxruntime from source
-sudo rm -rf /build /onnxruntime_src
-sudo ln -s $BUILD_SOURCESDIRECTORY /onnxruntime_src
python3 -m pip install --upgrade pip
python3 -m pip uninstall -y $PYTHON_PACKAGE_NAME ort-nightly-gpu ort-nightly onnxruntime onnxruntime-gpu onnxruntime-training onnxruntime-directml ort-nightly-directml onnx -qq
# Install the packages that are needed for installing the onnxruntime python package
-python3 -m pip install -r $BUILD_BINARIESDIRECTORY/$BUILD_CONFIG/requirements.txt
+python3 -m pip install -r /build/$BUILD_CONFIG/requirements.txt
# Install the packages that are needed for running test scripts
-# Install the latest ONNX release which may contain not fixed bugs. However, it is what most people use.
-python3 -m pip install onnx pytest
+python3 -m pip install pytest
# The "--no-index" flag is crucial. The local whl folder is just an additional source. Pypi's doc says "there is no
# ordering in the locations that are searched" if we don't disable the default one with "--no-index"
-python3 -m pip install --no-index --find-links $BUILD_BINARIESDIRECTORY/whl $PYTHON_PACKAGE_NAME
-ln -s /data/models $BUILD_BINARIESDIRECTORY
-cd $BUILD_BINARIESDIRECTORY/$BUILD_CONFIG
+python3 -m pip install --no-index --find-links /build/whl $PYTHON_PACKAGE_NAME
+cd /build/$BUILD_CONFIG
# Restore file permissions
xargs -a perms.txt chmod a+x
-python3 $BUILD_SOURCESDIRECTORY/tools/ci_build/build.py $BUILD_ARGS --ctest_path ''
+python3 /onnxruntime_src/tools/ci_build/build.py $BUILD_ARGS --ctest_path ''
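The `xargs -a perms.txt chmod a+x` restore step assumes a `perms.txt` listing the files whose execute bit must come back; a self-contained sketch of that save/restore round trip (temporary files, GNU `find`/`xargs`):

```shell
#!/bin/bash
set -e
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch run_tests.sh notes.txt
chmod a+x run_tests.sh
# Save: record every file that currently has the owner-execute bit
find . -type f -perm -u+x -printf '%P\n' > perms.txt
chmod a-x run_tests.sh            # simulate the bit being lost
# Restore: re-apply execute permission to every recorded file
xargs -a perms.txt chmod a+x
[ -x run_tests.sh ] && echo restored   # → restored
```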

tools/scripts/python_test.sh Normal file → Executable file

tools/scripts/symbolic_shape_infer_test.sh Normal file → Executable file