Merge branch 'main' into prevent-rct18nutil-deadlock

This commit is contained in:
Adam Gleitman 2022-08-23 16:09:38 -07:00 committed by GitHub
Parents: 0890b47d2b 96be39d8cd
Commit: 4161f9323f
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
2146 changed files with 3772 additions and 793270 deletions

View file

@ -37,6 +37,8 @@ jobs:
echo "##vso[task.setvariable variable=package_version]$(cat package.json | jq .version | awk '{ print substr($0, 2, length($0) - 2) }')"
echo "##vso[task.setvariable variable=react_version]$(cat package.json | jq .peerDependencies.react)"
echo "##vso[task.setvariable variable=rncli_version]$(cat package.json | jq '.dependencies."@react-native-community/cli"')"
echo "##vso[task.setvariable variable=rncli_android_version]$(cat package.json | jq '.dependencies."@react-native-community/cli-platform-android"')"
echo "##vso[task.setvariable variable=rncli_ios_version]$(cat package.json | jq '.dependencies."@react-native-community/cli-platform-ios"')"
displayName: 'Determine react-native-macos version'
- bash: |
npm pack
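A note on the `awk` idiom in the first hunk: `jq` prints JSON strings with their surrounding quotes, and `substr($0, 2, length($0) - 2)` strips the first and last character to recover the bare version. A quick local check (the version string is a made-up example; `jq -r .version` would give the same result):

```shell
# jq-style quoted output; substr drops the leading and trailing quote
echo '"0.64.1"' | awk '{ print substr($0, 2, length($0) - 2) }'
# prints: 0.64.1
```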
@ -62,8 +64,8 @@ jobs:
set -eo pipefail
cat package.json |
jq '.devDependencies["@react-native-community/cli"] = $(rncli_version)' |
jq '.devDependencies["@react-native-community/cli-platform-android"] = $(rncli_version)' |
jq '.devDependencies["@react-native-community/cli-platform-ios"] = $(rncli_version)' |
jq '.devDependencies["@react-native-community/cli-platform-android"] = $(rncli_android_version)' |
jq '.devDependencies["@react-native-community/cli-platform-ios"] = $(rncli_ios_version)' |
jq '.devDependencies["react"] = $(react_version)' |
jq '.devDependencies["react-native"] = "^0.64"' |
jq '.devDependencies["react-native-macos"] = "../../react-native-macos-$(package_version).tgz"' |
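The hunk above threads `package.json` through a chain of `jq` assignments, one devDependency per invocation. The same pattern can be tried locally (the package names and versions here are hypothetical):

```shell
# each jq invocation sets one devDependency and passes the JSON along the pipe
echo '{"devDependencies":{}}' |
  jq '.devDependencies["react"] = "17.0.1"' |
  jq -c '.devDependencies["react-native"] = "^0.64"'
# prints: {"devDependencies":{"react":"17.0.1","react-native":"^0.64"}}
```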

View file

@ -147,7 +147,7 @@ jobs:
inputs:
script: |
cd packages/react-native-macos-init
yarn install
yarn install --frozen-lockfile
- task: CmdLine@2
displayName: yarn build

View file

@ -12,8 +12,10 @@ steps:
slice_name: ${{ parameters.slice_name }}
xcode_version: ${{ parameters.xcode_version }}
- script: 'yarn install'
displayName: 'yarn install'
- task: CmdLine@2
displayName: yarn install
inputs:
script: yarn install --frozen-lockfile
- task: CmdLine@2
displayName: yarn test-ci [test]

View file

@ -27,7 +27,7 @@ steps:
- task: CmdLine@2
displayName: yarn install
inputs:
script: yarn install
script: yarn install --frozen-lockfile
- task: CmdLine@2
displayName: pod install

View file

@ -1,81 +0,0 @@
# Docker Test Environment
This is a high-level overview of the test configuration using Docker.
It explains how to run the tests locally.
## Docker Installation
You need Docker running on your machine in order to build and run the tests in the Dockerfiles.
See <https://docs.docker.com/engine/installation/> for more information on how to install.
## Convenience Scripts
We have added a number of default run scripts to the `package.json` file to simplify building and running your tests.
### Configuring Docker Images
The following two scripts need to be run first before you can move on to testing:
- `yarn run docker-setup-android`: Pulls down the React Native Community Android image that serves as a base image when building the actual test image.
- `yarn run docker-build-android`: Builds a test image with the latest dependencies and React Native library code, including a compiled Android test app.
### Running Tests
Once the test image has been built, it can be used to run our Android tests.
- `yarn run test-android-run-unit` runs the unit tests, as defined in `scripts/run-android-docker-unit-tests.sh`.
- `yarn run test-android-run-e2e` runs the end-to-end tests, as defined in `scripts/run-ci-e2e-tests.sh`.
- `yarn run test-android-run-instrumentation` runs the instrumentation tests, as defined in `scripts/run-android-docker-instrumentation-tests.sh`.
#### Instrumentation Tests
The instrumentation test script accepts the following flags in order to customize the execution of the tests:
`--filter` - A regex that filters which instrumentation tests will be run. (Defaults to `.*`)
`--package` - Name of the Java package containing the instrumentation tests. (Defaults to `com.facebook.react.tests`)
`--path` - Path to the directory containing the instrumentation tests. (Defaults to `./ReactAndroid/src/androidTest/java/com/facebook/react/tests`)
`--retries` - Number of times to retry a failed test before declaring a failure. (Defaults to 2)
For example, if locally you only wanted to run the InitialPropsTestCase, you could do the following:
`yarn run test-android-run-instrumentation -- --filter="InitialPropsTestCase"`
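Per the test-runner script below, the `--filter` regex is matched case-insensitively against the fully qualified class name (package plus class). A rough shell equivalent of that selection step, using sample class names:

```shell
# case-insensitive filter over fully qualified test class names (sample data)
printf '%s\n' \
  'com.facebook.react.tests.InitialPropsTestCase' \
  'com.facebook.react.tests.ReactPickerTestCase' |
  grep -i 'initialpropstestcase'
# prints only: com.facebook.react.tests.InitialPropsTestCase
```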
## Detailed Android Setup
There are two Dockerfiles for use with the Android codebase.
The base image used to build `reactnativecommunity/react-native-android` is located in the https://github.com/react-native-community/docker-android GitHub repository.
It contains all the necessary prerequisites required to run the React Android tests.
It lives in a separate Dockerfile because those dependencies rarely change, and because the image is quite large (~9GB): it contains all the Android dependencies needed to run Android and the emulators.
The good news is you should rarely have to build or pull down the base image!
All iterative code updates happen as part of the `Dockerfile.android` image build.
Let's break it down.
First, you'll need to pull the base image.
You can use `docker pull` to grab the latest version of the `reactnativecommunity/react-native-android` base image.
This is what you get when you run `yarn run docker-setup-android`.
This will take quite some time depending on your connection and you need to ensure you have ~10GB of free disk space.
Once you have downloaded the base image, the test image can be built using `docker build -t reactnativeci/android -f ./.circleci/Dockerfiles/Dockerfile.android .`. This is what `yarn run docker-build-android` does. Note that the `-t` flag is how you tell Docker what to name this image locally. You can then use `docker run -t reactnativeci/android` to run this image.
Now that you've built the test image, you can run unit tests using what you've learned so far:
```bash
docker run --cap-add=SYS_ADMIN -it reactnativeci/android bash .circleci/Dockerfiles/scripts/run-android-docker-unit-tests.sh
```
> Note: The `--cap-add=SYS_ADMIN` flag is required for `.circleci/Dockerfiles/scripts/run-android-docker-unit-tests.sh` and `.circleci/Dockerfiles/scripts/run-android-docker-instrumentation-tests.sh` in order to allow remounting `/dev/shm` as writable, so the `buck` build system may write temporary output to that location.
Every time you make any modifications to the codebase, including changes to the test scripts inside `.circleci/Dockerfiles/scripts`, you should re-run the `docker build ...` command in order for your updates to be included in your local docker test image.
For rapid iteration, it's useful to keep in mind that Docker can pass along arbitrary commands to an image.
For example, you can alternatively use Gradle in this manner:
```bash
docker run --cap-add=SYS_ADMIN -it reactnativeci/android ./gradlew RNTester:android:app:assembleRelease
```

View file

@ -1,85 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
#
# This image builds upon the React Native Community Android image:
# https://github.com/react-native-community/docker-android
#
# The base image is expected to remain relatively stable, and only
# needs to be updated when major dependencies such as the Android
# SDK or NDK are updated.
#
# In this Android Test image, we download the latest dependencies
# and build an Android application that can be used to run the
# tests specified in the scripts/ directory.
#
# For compliance reasons, Microsoft cannot depend on Docker Hub, and
# the React Native Community image is not published to MCR (https://mcr.microsoft.com/).
# If we need to run these validations, we can clone the repo that publishes the community image,
# patch it to use MCR for the base Ubuntu image, build the community image locally,
# and then build this customized step on top of it.
#
# Disabling this is okay because, per macOS GH#774, this test, which is redundant with the Azure DevOps test,
# fails in the fork because of Microsoft's V8 upgrade to Android.
#
#FROM reactnativecommunity/react-native-android:5.2
LABEL Description="React Native Android Test Image"
LABEL maintainer="Héctor Ramos <hector@fb.com>"
ARG BUCK_BUILD
# set default environment variables
ENV GRADLE_OPTS="-Dorg.gradle.daemon=false -Dorg.gradle.jvmargs=\"-Xmx512m -XX:+HeapDumpOnOutOfMemoryError\""
ENV JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF8"
ADD .buckconfig /app/.buckconfig
ADD .buckjavaargs /app/.buckjavaargs
ADD BUCK /app/BUCK
ADD Libraries /app/Libraries
ADD ReactAndroid /app/ReactAndroid
ADD ReactCommon /app/ReactCommon
ADD React /app/React
ADD keystores /app/keystores
ADD packages/react-native-codegen /app/packages/react-native-codegen
ADD tools /app/tools
# add third party dependencies
ADD Folly /app/Folly
ADD glog /app/glog
ADD double-conversion /app/double-conversion
ADD jsc /app/jsc
# set workdir
WORKDIR /app
RUN buck fetch ReactAndroid/src/test/java/com/facebook/react/modules
RUN buck fetch ReactAndroid/src/main/java/com/facebook/react
RUN buck fetch ReactAndroid/src/main/java/com/facebook/react/shell
RUN buck fetch ReactAndroid/src/test/...
RUN buck fetch ReactAndroid/src/androidTest/...
RUN buck build ReactAndroid/src/main/java/com/facebook/react
RUN buck build ReactAndroid/src/main/java/com/facebook/react/shell
ADD gradle /app/gradle
ADD gradlew /app/gradlew
ADD settings.gradle /app/settings.gradle
ADD build.gradle /app/build.gradle
ADD react.gradle /app/react.gradle
# run gradle downloads
RUN ./gradlew :ReactAndroid:downloadBoost # :ReactAndroid:downloadDoubleConversion :ReactAndroid:downloadFolly :ReactAndroid:downloadGlog :ReactAndroid:downloadJSC
# compile native libs with Gradle script, we need bridge for unit and integration tests
RUN ./gradlew :ReactAndroid:packageReactNdkLibsForBuck -Pjobs=1
# add all react-native code
ADD . /app
RUN yarn
RUN ./gradlew :ReactAndroid:downloadBoost :ReactAndroid:downloadDoubleConversion :ReactAndroid:downloadFolly :ReactAndroid:downloadGlog
RUN ./gradlew :ReactAndroid:packageReactNdkLibsForBuck -Pjobs=1

View file

@ -1,159 +0,0 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
'use strict';
/**
* This script runs instrumentation tests one by one with retries
* Instrumentation tests tend to be flaky, so rerunning them individually increases
* chances for success and reduces total average execution time.
*
* We assume that all instrumentation tests are flat in one folder
* Available arguments:
* --path - path to all .java files with tests
* --package - com.facebook.react.tests
* --retries [num] - how many times to retry possibly flaky commands (npm install and running tests); default 2
*/
const argv = require('yargs').argv;
const async = require('async');
const child_process = require('child_process');
const fs = require('fs');
const path = require('path');
const colors = {
GREEN: '\x1b[32m',
RED: '\x1b[31m',
RESET: '\x1b[0m',
};
const test_opts = {
FILTER: new RegExp(argv.filter || '.*', 'i'),
IGNORE: argv.ignore || null,
PACKAGE: argv.package || 'com.facebook.react.tests',
PATH: argv.path || './ReactAndroid/src/androidTest/java/com/facebook/react/tests',
RETRIES: parseInt(argv.retries || 2, 10),
TEST_TIMEOUT: parseInt(argv['test-timeout'] || 1000 * 60 * 10, 10),
OFFSET: argv.offset,
COUNT: argv.count,
};
let max_test_class_length = Number.NEGATIVE_INFINITY;
let testClasses = fs.readdirSync(path.resolve(process.cwd(), test_opts.PATH))
.filter((file) => {
return file.endsWith('.java');
}).map((clazz) => {
return path.basename(clazz, '.java');
});
if (test_opts.IGNORE) {
test_opts.IGNORE = new RegExp(test_opts.IGNORE, 'i');
testClasses = testClasses.filter(className => {
return !test_opts.IGNORE.test(className);
});
}
testClasses = testClasses.map((clazz) => {
return test_opts.PACKAGE + '.' + clazz;
}).filter((clazz) => {
return test_opts.FILTER.test(clazz);
});
// only process subset of the tests at corresponding offset and count if args provided
if (test_opts.COUNT != null && test_opts.OFFSET != null) {
const start = test_opts.COUNT * test_opts.OFFSET;
const end = start + test_opts.COUNT;
if (start >= testClasses.length) {
testClasses = [];
} else if (end >= testClasses.length) {
testClasses = testClasses.slice(start);
} else {
testClasses = testClasses.slice(start, end);
}
}
async.mapSeries(testClasses, (clazz, callback) => {
if (clazz.length > max_test_class_length) {
max_test_class_length = clazz.length;
}
return async.retry(test_opts.RETRIES, (retryCb) => {
const test_process = child_process.spawn('./.circleci/Dockerfiles/scripts/run-instrumentation-tests-via-adb-shell.sh', [test_opts.PACKAGE, clazz], {
stdio: 'inherit',
});
const timeout = setTimeout(() => {
test_process.kill();
}, test_opts.TEST_TIMEOUT);
test_process.on('error', (err) => {
clearTimeout(timeout);
retryCb(err);
});
test_process.on('exit', (code) => {
clearTimeout(timeout);
if (code !== 0) {
return retryCb(new Error(`Process exited with code: ${code}`));
}
return retryCb();
});
}, (err) => {
return callback(null, {
name: clazz,
status: err ? 'failure' : 'success',
});
});
}, (err, results) => {
print_test_suite_results(results);
const failures = results.filter((test) => {
return test.status === 'failure';
});
return failures.length === 0 ? process.exit(0) : process.exit(1);
});
function print_test_suite_results(results) {
console.log('\n\nTest Suite Results:\n');
let color;
let failing_suites = 0;
let passing_suites = 0;
function pad_output(num_chars) {
let i = 0;
while (i < num_chars) {
process.stdout.write(' ');
i++;
}
}
results.forEach((test) => {
if (test.status === 'success') {
color = colors.GREEN;
passing_suites++;
} else if (test.status === 'failure') {
color = colors.RED;
failing_suites++;
}
process.stdout.write(color);
process.stdout.write(test.name);
pad_output((max_test_class_length - test.name.length) + 8);
process.stdout.write(test.status);
process.stdout.write(`${colors.RESET}\n`);
});
console.log(`\n${passing_suites} passing, ${failing_suites} failing!`);
}

View file

@ -1,40 +0,0 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# for buck gen
mount -o remount,exec /dev/shm
AVD_UUID=$(< /dev/urandom tr -dc 'a-zA-Z0-9' | fold -w 8 | head -n 1)
# create virtual device
echo no | android create avd -n "$AVD_UUID" -f -t android-21 --abi default/armeabi-v7a
# emulator setup
emulator64-arm -avd $AVD_UUID -no-skin -no-audio -no-window -no-boot-anim &
bootanim=""
until [[ "$bootanim" =~ "stopped" ]]; do
sleep 5
bootanim=$(adb -e shell getprop init.svc.bootanim 2>&1)
echo "boot animation status=$bootanim"
done
set -x
# solve issue with max user watches limit
echo 65536 | tee -a /proc/sys/fs/inotify/max_user_watches
watchman shutdown-server
# integration tests
# build JS bundle for instrumentation tests
node cli.js bundle --platform android --dev true --entry-file ReactAndroid/src/androidTest/js/TestBundle.js --bundle-output ReactAndroid/src/androidTest/assets/AndroidTestBundle.js
# build test APK
# shellcheck disable=SC1091
source ./scripts/android-setup.sh && NO_BUCKD=1 retry3 buck install ReactAndroid/src/androidTest/buck-runner:instrumentation-tests --config build.threads=1
# run installed apk with tests
node ./.circleci/Dockerfiles/scripts/run-android-ci-instrumentation-tests.js "$*"
exit $?

View file

@ -1,16 +0,0 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# set default environment variables
UNIT_TESTS_BUILD_THREADS="${UNIT_TESTS_BUILD_THREADS:-1}"
# for buck gen
mount -o remount,exec /dev/shm
set -x
# run unit tests
buck test ReactAndroid/src/test/... --config build.threads="$UNIT_TESTS_BUILD_THREADS"

View file

@ -1,251 +0,0 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
set -ex
# set default environment variables
ROOT=$(pwd)
SCRIPTS=$(pwd)/scripts
RUN_ANDROID=0
RUN_CLI_INSTALL=1
RUN_IOS=0
RUN_JS=0
RETRY_COUNT=${RETRY_COUNT:-2}
AVD_UUID=$(< /dev/urandom tr -dc 'a-zA-Z0-9' | fold -w 8 | head -n 1)
ANDROID_NPM_DEPS="appium@1.5.1 mocha@2.4.5 wd@0.3.11 colors@1.0.3 pretty-data2@0.40.1"
CLI_PACKAGE="$ROOT/react-native-cli/react-native-cli-*.tgz"
PACKAGE="$ROOT/react-native-*.tgz"
# solve issue with max user watches limit
echo 65536 | tee -a /proc/sys/fs/inotify/max_user_watches
watchman shutdown-server
# retries command on failure
# $1 -- max attempts
# $2 -- command to run
function retry() {
local -r -i max_attempts="$1"; shift
local -r cmd="$*"
local -i attempt_num=1
until $cmd; do
if (( attempt_num == max_attempts )); then
echo "Execution of '$cmd' failed; no more attempts left"
return 1
else
(( attempt_num++ ))
echo "Execution of '$cmd' failed; retrying for attempt number $attempt_num..."
fi
done
}
# parse command line args & flags
while :; do
case "$1" in
--android)
RUN_ANDROID=1
shift
;;
--ios)
RUN_IOS=1
shift
;;
--js)
RUN_JS=1
shift
;;
--skip-cli-install)
RUN_CLI_INSTALL=0
shift
;;
*)
break
esac
done
function e2e_suite() {
cd "$ROOT"
if [ $RUN_ANDROID -eq 0 ] && [ $RUN_IOS -eq 0 ] && [ $RUN_JS -eq 0 ]; then
echo "No e2e tests specified!"
return 0
fi
# create temp dir
TEMP_DIR=$(mktemp -d /tmp/react-native-XXXXXXXX)
# To make sure we actually installed the local version
# of react-native, we will create a temp file inside the template
# and check that it exists after `react-native init`
IOS_MARKER="$(mktemp "$ROOT"/template/ios/HelloWorld/XXXXXXXX)"
ANDROID_MARKER="$(mktemp "$ROOT"/template/android/XXXXXXXX)"
# install CLI
cd react-native-cli
npm pack
cd ..
# can skip cli install for non sudo mode
if [ $RUN_CLI_INSTALL -ne 0 ]; then
if ! npm install -g "$CLI_PACKAGE"
then
echo "Could not install react-native-cli globally, please run in su mode"
echo "Or with --skip-cli-install to skip this step"
return 1
fi
fi
if [ $RUN_ANDROID -ne 0 ]; then
set +ex
# create virtual device
if ! android list avd | grep "$AVD_UUID" > /dev/null; then
echo no | android create avd -n "$AVD_UUID" -f -t android-21 --abi default/armeabi-v7a
fi
# `adb devices` output includes a header line and a trailing newline
DEVICE_COUNT=$(adb devices | wc -l)
((DEVICE_COUNT -= 2))
# will always kill an existing emulator if one exists for fresh setup
if [[ $DEVICE_COUNT -ge 1 ]]; then
adb emu kill
fi
# emulator setup
emulator64-arm -avd "$AVD_UUID" -no-skin -no-audio -no-window -no-boot-anim &
bootanim=""
# shellcheck disable=SC2076
until [[ "$bootanim" =~ "stopped" ]]; do
sleep 5
bootanim=$(adb -e shell getprop init.svc.bootanim 2>&1)
echo "boot animation status=$bootanim"
done
set -ex
if ! ./gradlew :ReactAndroid:installArchives -Pjobs=1 -Dorg.gradle.jvmargs="-Xmx512m -XX:+HeapDumpOnOutOfMemoryError"
then
echo "Failed to compile Android binaries"
return 1
fi
fi
if ! npm pack
then
echo "Failed to pack react-native"
return 1
fi
cd "$TEMP_DIR"
if ! retry "$RETRY_COUNT" react-native init EndToEndTest --version "$PACKAGE" --npm
then
echo "Failed to execute react-native init"
echo "Most common reason is npm registry connectivity, try again"
return 1
fi
cd EndToEndTest
# android tests
if [ $RUN_ANDROID -ne 0 ]; then
echo "Running an Android e2e test"
echo "Installing e2e framework"
if ! retry "$RETRY_COUNT" npm install --save-dev "$ANDROID_NPM_DEPS" --silent >> /dev/null
then
echo "Failed to install appium"
echo "Most common reason is npm registry connectivity, try again"
return 1
fi
cp "$SCRIPTS/android-e2e-test.js" android-e2e-test.js
(
cd android || exit
echo "Downloading Maven deps"
./gradlew :app:copyDownloadableDepsToLibs
)
keytool -genkey -v -keystore android/keystores/debug.keystore -storepass android -alias androiddebugkey -keypass android -dname "CN=Android Debug,O=Android,C=US"
node ./node_modules/.bin/appium >> /dev/null &
APPIUM_PID=$!
echo "Starting appium server $APPIUM_PID"
echo "Building app"
buck build android/app
# hack to get node unhung (kill buckd)
if ! kill -9 "$(pgrep java)"
then
echo "could not execute Buck build, is it installed and in PATH?"
return 1
fi
echo "Starting Metro"
npm start >> /dev/null &
SERVER_PID=$!
sleep 15
echo "Executing android e2e test"
if ! retry "$RETRY_COUNT" node node_modules/.bin/_mocha android-e2e-test.js
then
echo "Failed to run Android e2e tests"
echo "Most likely the code is broken"
return 1
fi
# kill packager process
if kill -0 "$SERVER_PID"; then
echo "Killing packager $SERVER_PID"
kill -9 "$SERVER_PID"
fi
# kill appium process
if kill -0 "$APPIUM_PID"; then
echo "Killing appium $APPIUM_PID"
kill -9 "$APPIUM_PID"
fi
fi
# ios tests
if [ $RUN_IOS -ne 0 ]; then
echo "Running ios e2e tests not yet implemented for docker!"
fi
# js tests
if [ $RUN_JS -ne 0 ]; then
# Check the packager produces a bundle (doesn't throw an error)
if ! react-native bundle --max-workers 1 --platform android --dev true --entry-file index.js --bundle-output android-bundle.js
then
echo "Could not build android bundle"
return 1
fi
if ! react-native bundle --max-workers 1 --platform ios --dev true --entry-file index.js --bundle-output ios-bundle.js
then
echo "Could not build iOS bundle"
return 1
fi
fi
# directory cleanup
rm "$IOS_MARKER"
rm "$ANDROID_MARKER"
return 0
}
retry "$RETRY_COUNT" e2e_suite

View file

@ -1,70 +0,0 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# shellcheck disable=SC1117
# Python script to run instrumentation tests, copied from https://github.com/circleci/circle-dummy-android
# Example: ./scripts/run-android-instrumentation-tests.sh com.facebook.react.tests com.facebook.react.tests.ReactPickerTestCase
#
export PATH="$ANDROID_HOME/platform-tools:$ANDROID_HOME/tools:$PATH"
# clear the logs
adb logcat -c
# run tests and check output
python - "$1" "$2" << END
import re
import subprocess as sp
import sys
import threading
import time
done = False
test_app = sys.argv[1]
test_class = None
if len(sys.argv) > 2:
test_class = sys.argv[2]
def update():
# prevent CircleCI from killing the process for inactivity
while not done:
time.sleep(5)
print "Running in background. Waiting for 'adb' command response..."
t = threading.Thread(target=update)
t.daemon = True
t.start()
def run():
sp.Popen(['adb', 'wait-for-device']).communicate()
if test_class is not None:
p = sp.Popen('adb shell am instrument -w -e class %s %s/android.support.test.runner.AndroidJUnitRunner'
% (test_class, test_app), shell=True, stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE)
else :
p = sp.Popen('adb shell am instrument -w %s/android.support.test.runner.AndroidJUnitRunner'
% (test_app), shell=True, stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE)
return p.communicate()
success = re.compile(r'OK \(\d+ test(s)?\)')
stdout, stderr = run()
done = True
print stderr
print stdout
if success.search(stderr + stdout):
sys.exit(0)
else:
# dump the logs
sp.Popen(['adb', 'logcat', '-d']).communicate()
sys.exit(1) # make sure we fail if the test failed
END
RETVAL=$?
exit $RETVAL

View file

@ -1,5 +0,0 @@
# Circle CI
This directory is home to the Circle CI configuration file. Circle is our continuous integration service provider. You can see the overall status of React Native's builds at https://circleci.com/gh/facebook/react-native
You may also see an individual PR's build status by scrolling down to the Checks section in the PR.

File diff suppressed because it is too large. Load Diff

View file

@ -1,103 +0,0 @@
[ignore]
; We fork some components by platform
.*/*[.]ios.js
; Ignore templates for 'react-native init'
.*/local-cli/templates/.*
; Ignore the Dangerfile
<PROJECT_ROOT>/bots/dangerfile.js
; Ignore "BUCK" generated dirs
<PROJECT_ROOT>/\.buckd/
; Ignore unexpected extra "@providesModule"
.*/node_modules/.*/node_modules/fbjs/.*
; Ignore duplicate module providers
; For RN Apps installed via npm, "Libraries" folder is inside
; "node_modules/react-native" but in the source repo it is in the root
.*/Libraries/react-native/React.js
; Ignore polyfills
.*/Libraries/polyfills/.*
; Ignore metro
.*/node_modules/metro/.*
; These should not be required directly
; require from fbjs/lib instead: require('fbjs/lib/invariant')
.*/node_modules/invariant/.*
.*/node_modules/warning/.*
[include]
[libs]
Libraries/react-native/react-native-interface.js
flow/
flow-github/
[options]
emoji=true
esproposal.optional_chaining=enable
esproposal.nullish_coalescing=enable
module.system=haste
module.system.haste.use_name_reducers=true
# keep the following in sync with server/haste/hasteImpl.js
# get basename
module.system.haste.name_reducers='^.*/\([a-zA-Z0-9$_.-]+\.js\(\.flow\)?\)$' -> '\1'
# strip .js or .js.flow suffix
module.system.haste.name_reducers='^\(.*\)\.js\(\.flow\)?$' -> '\1'
# strip .android suffix
module.system.haste.name_reducers='^\(.*\)\.android$' -> '\1'
module.system.haste.name_reducers='^\(.*\)\.ios$' -> '\1'
module.system.haste.name_reducers='^\(.*\)\.native$' -> '\1'
module.system.haste.paths.blacklist=.*/__tests__/.*
module.system.haste.paths.blacklist=.*/__mocks__/.*
module.system.haste.paths.whitelist=<PROJECT_ROOT>/Libraries/.*
module.system.haste.paths.whitelist=<PROJECT_ROOT>/RNTester/.*
module.system.haste.paths.whitelist=<PROJECT_ROOT>/IntegrationTests/.*
module.system.haste.paths.blacklist=<PROJECT_ROOT>/Libraries/Animated/src/polyfills/.*
munge_underscores=true
module.name_mapper='^[./a-zA-Z0-9$_-]+\.\(bmp\|gif\|jpg\|jpeg\|png\|psd\|svg\|webp\|m4v\|mov\|mp4\|mpeg\|mpg\|webm\|aac\|aiff\|caf\|m4a\|mp3\|wav\|html\|pdf\)$' -> 'RelativeImageStub'
suppress_type=$FlowIssue
suppress_type=$FlowFixMe
suppress_type=$FlowFixMeProps
suppress_type=$FlowFixMeState
suppress_comment=\\(.\\|\n\\)*\\$FlowFixMe\\($\\|[^(]\\|(\\(<VERSION>\\)? *\\(site=[a-z,_]*[react_native\\(_android\\)?_oss|react_native\\(_android\\)?_fb][a-z,_]*\\)?)\\)
suppress_comment=\\(.\\|\n\\)*\\$FlowIssue\\((\\(<VERSION>\\)? *\\(site=[a-z,_]*[react_native\\(_android\\)?_oss|react_native\\(_android\\)?_fb][a-z,_]*\\)?)\\)?:? #[0-9]+
suppress_comment=\\(.\\|\n\\)*\\$FlowFixedInNextDeploy
suppress_comment=\\(.\\|\n\\)*\\$FlowExpectedError
[lints]
all=warn
unnecessary-optional-chain=off
# There is an ESLint rule for this
unclear-type=off
sketchy-null=off
sketchy-null-number=warn
sketchy-null-mixed=warn
# This is noisy for now. We *do* still want to warn on importing types
# from untyped files, which is covered by untyped-type-import
untyped-import=off
[strict]
deprecated-type
nonstrict-import
sketchy-null
unclear-type
unsafe-getters-setters
untyped-import
untyped-type-import
[version]
^0.78.0

Folly/.gitignore vendored
View file

@ -1,33 +0,0 @@
*.o
*.lo
*.la
.dirstamp
Makefile
Makefile.in
.libs
.deps
stamp-h1
folly-config.h
_configs.sed
aclocal.m4
autom4te.cache
build-aux
libtool
folly/test/gtest
folly/folly-config.h
folly/**/test/*_benchmark
folly/**/test/*.log
folly/**/test/*_test
folly/**/test/*_test_using_jemalloc
folly/**/test/*.trs
folly/config.*
folly/configure
folly/logging/example/logging_example
folly/libfolly.pc
folly/m4/libtool.m4
folly/m4/ltoptions.m4
folly/m4/ltsugar.m4
folly/m4/ltversion.m4
folly/m4/lt~obsolete.m4
folly/generate_fingerprint_tables
folly/FingerprintTables.cpp

View file

@ -1,50 +0,0 @@
# Facebook projects that use `fbcode_builder` for continuous integration
# share this Travis configuration to run builds via Docker.
sudo: required
# Docker disables IPv6 in containers by default. Enable it for unit tests that need [::1].
before_script:
# `daemon.json` is normally missing, but let's log it in case that changes.
- sudo touch /etc/docker/daemon.json
- sudo cat /etc/docker/daemon.json
- sudo service docker stop
# This needs YAML quoting because of the curly braces.
- 'echo ''{"ipv6": true, "fixed-cidr-v6": "2001:db8:1::/64"}'' | sudo tee /etc/docker/daemon.json'
- sudo service docker start
# Fail early if docker failed on start -- add `- sudo dockerd` to debug.
- sudo docker info
# Paranoia log: what if our config got overwritten?
- sudo cat /etc/docker/daemon.json
env:
global:
- travis_cache_dir=$HOME/travis_ccache
# Travis times out after 50 minutes. Very generously leave 10 minutes
# for setup (e.g. cache download, compression, and upload), so we never
# fail to cache the progress we made.
- docker_build_timeout=40m
cache:
# Our build caches can be 200-300MB, so increase the timeout to 7 minutes
# to make sure we never fail to cache the progress we made.
timeout: 420
directories:
- $HOME/travis_ccache # see docker_build_with_ccache.sh
# Ugh, `services:` must be in the matrix, or we get `docker: command not found`
# https://github.com/travis-ci/travis-ci/issues/5142
matrix:
include:
- env: ['os_image=ubuntu:16.04', gcc_version=5]
services: [docker]
script:
# Travis seems to get confused when `matrix:` is used with `language:`
- sudo apt-get install python2.7
# We don't want to write the script inline because of Travis kludginess --
# it looks like it escapes " and \ in scripts when using `matrix:`.
- ./build/fbcode_builder/travis_docker_build.sh
notifications:
webhooks: https://code.facebook.com/travis/webhook/

View file

@ -1,22 +0,0 @@
# Finds libdouble-conversion.
#
# This module defines:
# DOUBLE_CONVERSION_INCLUDE_DIR
# DOUBLE_CONVERSION_LIBRARY
#
find_path(DOUBLE_CONVERSION_INCLUDE_DIR double-conversion/double-conversion.h)
find_library(DOUBLE_CONVERSION_LIBRARY NAMES double-conversion)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
DOUBLE_CONVERSION DEFAULT_MSG
DOUBLE_CONVERSION_LIBRARY DOUBLE_CONVERSION_INCLUDE_DIR)
if (NOT DOUBLE_CONVERSION_FOUND)
message(STATUS "Using third-party bundled double-conversion")
else()
message(STATUS "Found double-conversion: ${DOUBLE_CONVERSION_LIBRARY}")
endif (NOT DOUBLE_CONVERSION_FOUND)
mark_as_advanced(DOUBLE_CONVERSION_INCLUDE_DIR DOUBLE_CONVERSION_LIBRARY)

View file

@ -1,27 +0,0 @@
#
# Find libgflags
#
# LIBGFLAGS_INCLUDE_DIR - where to find gflags/gflags.h, etc.
# LIBGFLAGS_LIBRARY - List of libraries when using libgflags.
# LIBGFLAGS_FOUND - True if libgflags found.
IF (LIBGFLAGS_INCLUDE_DIR)
# Already in cache, be silent
SET(LIBGFLAGS_FIND_QUIETLY TRUE)
ENDIF ()
FIND_PATH(LIBGFLAGS_INCLUDE_DIR gflags/gflags.h)
FIND_LIBRARY(LIBGFLAGS_LIBRARY_DEBUG NAMES gflagsd gflags_staticd)
FIND_LIBRARY(LIBGFLAGS_LIBRARY_RELEASE NAMES gflags gflags_static)
INCLUDE(SelectLibraryConfigurations)
SELECT_LIBRARY_CONFIGURATIONS(LIBGFLAGS)
# handle the QUIETLY and REQUIRED arguments and set LIBGFLAGS_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(LIBGFLAGS DEFAULT_MSG LIBGFLAGS_LIBRARY LIBGFLAGS_INCLUDE_DIR)
MARK_AS_ADVANCED(LIBGFLAGS_LIBRARY LIBGFLAGS_INCLUDE_DIR)

View file

@ -1,23 +0,0 @@
#
# Find libglog
#
# LIBGLOG_INCLUDE_DIR - where to find glog/logging.h, etc.
# LIBGLOG_LIBRARY - List of libraries when using libglog.
# LIBGLOG_FOUND - True if libglog found.
IF (LIBGLOG_INCLUDE_DIR)
# Already in cache, be silent
SET(LIBGLOG_FIND_QUIETLY TRUE)
ENDIF ()
FIND_PATH(LIBGLOG_INCLUDE_DIR glog/logging.h)
FIND_LIBRARY(LIBGLOG_LIBRARY glog)
# handle the QUIETLY and REQUIRED arguments and set LIBGLOG_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(LIBGLOG DEFAULT_MSG LIBGLOG_LIBRARY LIBGLOG_INCLUDE_DIR)
MARK_AS_ADVANCED(LIBGLOG_LIBRARY LIBGLOG_INCLUDE_DIR)


@@ -1,65 +0,0 @@
#
# Find libgmock
#
# LIBGMOCK_DEFINES - List of defines when using libgmock.
# LIBGMOCK_INCLUDE_DIR - where to find gmock/gmock.h, etc.
# LIBGMOCK_LIBRARIES - List of libraries when using libgmock.
# LIBGMOCK_FOUND - True if libgmock found.
IF (LIBGMOCK_INCLUDE_DIR)
# Already in cache, be silent
SET(LIBGMOCK_FIND_QUIETLY TRUE)
ENDIF ()
FIND_PATH(LIBGMOCK_INCLUDE_DIR gmock/gmock.h)
FIND_LIBRARY(LIBGMOCK_MAIN_LIBRARY_DEBUG NAMES gmock_maind)
FIND_LIBRARY(LIBGMOCK_MAIN_LIBRARY_RELEASE NAMES gmock_main)
FIND_LIBRARY(LIBGMOCK_LIBRARY_DEBUG NAMES gmockd)
FIND_LIBRARY(LIBGMOCK_LIBRARY_RELEASE NAMES gmock)
FIND_LIBRARY(LIBGTEST_LIBRARY_DEBUG NAMES gtestd)
FIND_LIBRARY(LIBGTEST_LIBRARY_RELEASE NAMES gtest)
find_package(Threads REQUIRED)
INCLUDE(SelectLibraryConfigurations)
SELECT_LIBRARY_CONFIGURATIONS(LIBGMOCK_MAIN)
SELECT_LIBRARY_CONFIGURATIONS(LIBGMOCK)
SELECT_LIBRARY_CONFIGURATIONS(LIBGTEST)
set(LIBGMOCK_LIBRARIES
${LIBGMOCK_MAIN_LIBRARY}
${LIBGMOCK_LIBRARY}
${LIBGTEST_LIBRARY}
Threads::Threads
)
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
# The GTEST_LINKED_AS_SHARED_LIBRARY macro must be set properly on Windows.
#
# There isn't currently an easy way to determine if a library was compiled as
# a shared library on Windows, so just assume we've been built against a
# shared build of gmock for now.
SET(LIBGMOCK_DEFINES "GTEST_LINKED_AS_SHARED_LIBRARY=1" CACHE STRING "")
endif()
# handle the QUIETLY and REQUIRED arguments and set LIBGMOCK_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
GMock
DEFAULT_MSG
LIBGMOCK_MAIN_LIBRARY
LIBGMOCK_LIBRARY
LIBGTEST_LIBRARY
LIBGMOCK_LIBRARIES
LIBGMOCK_INCLUDE_DIR
)
MARK_AS_ADVANCED(
LIBGMOCK_DEFINES
LIBGMOCK_MAIN_LIBRARY
LIBGMOCK_LIBRARY
LIBGTEST_LIBRARY
LIBGMOCK_LIBRARIES
LIBGMOCK_INCLUDE_DIR
)


@@ -1,27 +0,0 @@
# Finds liblz4.
#
# This module defines:
# LZ4_FOUND
# LZ4_INCLUDE_DIR
# LZ4_LIBRARY
#
find_path(LZ4_INCLUDE_DIR NAMES lz4.h)
find_library(LZ4_LIBRARY_DEBUG NAMES lz4d)
find_library(LZ4_LIBRARY_RELEASE NAMES lz4)
include(SelectLibraryConfigurations)
SELECT_LIBRARY_CONFIGURATIONS(LZ4)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
LZ4 DEFAULT_MSG
LZ4_LIBRARY LZ4_INCLUDE_DIR
)
if (LZ4_FOUND)
message(STATUS "Found LZ4: ${LZ4_LIBRARY}")
endif()
mark_as_advanced(LZ4_INCLUDE_DIR LZ4_LIBRARY)


@@ -1,15 +0,0 @@
find_path(LIBAIO_INCLUDE_DIR NAMES libaio.h)
mark_as_advanced(LIBAIO_INCLUDE_DIR)
find_library(LIBAIO_LIBRARY NAMES aio)
mark_as_advanced(LIBAIO_LIBRARY)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
LIBAIO
REQUIRED_VARS LIBAIO_LIBRARY LIBAIO_INCLUDE_DIR)
if(LIBAIO_FOUND)
set(LIBAIO_LIBRARIES ${LIBAIO_LIBRARY})
set(LIBAIO_INCLUDE_DIRS ${LIBAIO_INCLUDE_DIR})
endif()


@@ -1,18 +0,0 @@
# dwarf.h is typically installed in a libdwarf/ subdirectory on Debian-style
# Linux distributions. It is not installed in a libdwarf/ subdirectory on Mac
# systems when installed with Homebrew. Search for it in both locations.
find_path(LIBDWARF_INCLUDE_DIR NAMES dwarf.h PATH_SUFFIXES libdwarf)
mark_as_advanced(LIBDWARF_INCLUDE_DIR)
find_library(LIBDWARF_LIBRARY NAMES dwarf)
mark_as_advanced(LIBDWARF_LIBRARY)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
LIBDWARF
REQUIRED_VARS LIBDWARF_LIBRARY LIBDWARF_INCLUDE_DIR)
if(LIBDWARF_FOUND)
set(LIBDWARF_LIBRARIES ${LIBDWARF_LIBRARY})
set(LIBDWARF_INCLUDE_DIRS ${LIBDWARF_INCLUDE_DIR})
endif()


@@ -1,37 +0,0 @@
# - Find LibEvent (a cross-platform event library)
# This module defines
# LIBEVENT_INCLUDE_DIR, where to find LibEvent headers
# LIBEVENT_LIB, LibEvent libraries
# LibEvent_FOUND, If false, do not try to use libevent
set(LibEvent_EXTRA_PREFIXES /usr/local /opt/local "$ENV{HOME}")
foreach(prefix ${LibEvent_EXTRA_PREFIXES})
list(APPEND LibEvent_INCLUDE_PATHS "${prefix}/include")
list(APPEND LibEvent_LIB_PATHS "${prefix}/lib")
endforeach()
find_path(LIBEVENT_INCLUDE_DIR event.h PATHS ${LibEvent_INCLUDE_PATHS})
find_library(LIBEVENT_LIB NAMES event PATHS ${LibEvent_LIB_PATHS})
if (LIBEVENT_LIB AND LIBEVENT_INCLUDE_DIR)
set(LibEvent_FOUND TRUE)
set(LIBEVENT_LIB ${LIBEVENT_LIB})
else ()
set(LibEvent_FOUND FALSE)
endif ()
if (LibEvent_FOUND)
if (NOT LibEvent_FIND_QUIETLY)
message(STATUS "Found libevent: ${LIBEVENT_LIB}")
endif ()
else ()
if (LibEvent_FIND_REQUIRED)
message(FATAL_ERROR "Could NOT find libevent.")
endif ()
message(STATUS "libevent NOT found.")
endif ()
mark_as_advanced(
LIBEVENT_LIB
LIBEVENT_INCLUDE_DIR
)


@@ -1,15 +0,0 @@
find_path(LIBIBERTY_INCLUDE_DIR NAMES libiberty.h PATH_SUFFIXES libiberty)
mark_as_advanced(LIBIBERTY_INCLUDE_DIR)
find_library(LIBIBERTY_LIBRARY NAMES iberty)
mark_as_advanced(LIBIBERTY_LIBRARY)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
LIBIBERTY
REQUIRED_VARS LIBIBERTY_LIBRARY LIBIBERTY_INCLUDE_DIR)
if(LIBIBERTY_FOUND)
set(LIBIBERTY_LIBRARIES ${LIBIBERTY_LIBRARY})
set(LIBIBERTY_INCLUDE_DIRS ${LIBIBERTY_INCLUDE_DIR})
endif()


@@ -1,22 +0,0 @@
# Find the Snappy libraries
#
# This module defines:
# SNAPPY_FOUND
# SNAPPY_INCLUDE_DIR
# SNAPPY_LIBRARY
find_path(SNAPPY_INCLUDE_DIR NAMES snappy.h)
find_library(SNAPPY_LIBRARY_DEBUG NAMES snappyd)
find_library(SNAPPY_LIBRARY_RELEASE NAMES snappy)
include(SelectLibraryConfigurations)
SELECT_LIBRARY_CONFIGURATIONS(SNAPPY)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
SNAPPY DEFAULT_MSG
SNAPPY_LIBRARY SNAPPY_INCLUDE_DIR
)
mark_as_advanced(SNAPPY_INCLUDE_DIR SNAPPY_LIBRARY)


@@ -1,27 +0,0 @@
#
# - Try to find Facebook zstd library
# This will define
# ZSTD_FOUND
# ZSTD_INCLUDE_DIR
# ZSTD_LIBRARY
#
find_path(ZSTD_INCLUDE_DIR NAMES zstd.h)
find_library(ZSTD_LIBRARY_DEBUG NAMES zstdd)
find_library(ZSTD_LIBRARY_RELEASE NAMES zstd)
include(SelectLibraryConfigurations)
SELECT_LIBRARY_CONFIGURATIONS(ZSTD)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(
ZSTD DEFAULT_MSG
ZSTD_LIBRARY ZSTD_INCLUDE_DIR
)
if (ZSTD_FOUND)
message(STATUS "Found Zstd: ${ZSTD_LIBRARY}")
endif()
mark_as_advanced(ZSTD_INCLUDE_DIR ZSTD_LIBRARY)
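Several of these modules (LZ4, Snappy, Zstd) share the same debug/release pattern: find both library variants, then let CMake's stock `SelectLibraryConfigurations` module fold them into a single variable. A sketch of what that produces, per the documented behavior of `SELECT_LIBRARY_CONFIGURATIONS` (the paths and the `myapp` target here are invented for illustration):

```cmake
# Given hypothetical cache values:
#   ZSTD_LIBRARY_DEBUG   = /usr/lib/libzstdd.a
#   ZSTD_LIBRARY_RELEASE = /usr/lib/libzstd.a
# SELECT_LIBRARY_CONFIGURATIONS(ZSTD) sets:
#   ZSTD_LIBRARY   = optimized;/usr/lib/libzstd.a;debug;/usr/lib/libzstdd.a
#   ZSTD_LIBRARIES = the same keyword list
# If only one variant is found, ZSTD_LIBRARY is just that single path.
# Either form can be passed straight to target_link_libraries():
target_link_libraries(myapp PRIVATE ${ZSTD_LIBRARY})
```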


@@ -1,298 +0,0 @@
# Some additional configuration options.
option(MSVC_ENABLE_ALL_WARNINGS "If enabled, pass /Wall to the compiler." ON)
option(MSVC_ENABLE_DEBUG_INLINING "If enabled, enable inlining in the debug configuration. This allows /Zc:inline to be far more effective." OFF)
option(MSVC_ENABLE_FAST_LINK "If enabled, pass /DEBUG:FASTLINK to the linker. This makes linking faster, but the gtest integration for Visual Studio can't currently handle the .pdbs generated." OFF)
option(MSVC_ENABLE_LEAN_AND_MEAN_WINDOWS "If enabled, define WIN32_LEAN_AND_MEAN to include a smaller subset of Windows.h" ON)
option(MSVC_ENABLE_LTCG "If enabled, use Link Time Code Generation for Release builds." OFF)
option(MSVC_ENABLE_PARALLEL_BUILD "If enabled, build multiple source files in parallel." ON)
option(MSVC_ENABLE_STATIC_ANALYSIS "If enabled, do more complex static analysis and generate warnings appropriately." OFF)
option(MSVC_USE_STATIC_RUNTIME "If enabled, build against the static, rather than the dynamic, runtime." OFF)
option(MSVC_SUPPRESS_BOOST_CONFIG_OUTDATED "If enabled, suppress Boost's warnings about the config being out of date." ON)
# Alas, option() doesn't support string values.
set(MSVC_FAVORED_ARCHITECTURE "blend" CACHE STRING "One of 'blend', 'AMD64', 'INTEL64', or 'ATOM'. This tells the compiler to generate code optimized to run best on the specified architecture.")
# Add a pretty drop-down selector for these values when using the GUI.
set_property(
CACHE MSVC_FAVORED_ARCHITECTURE
PROPERTY STRINGS
blend
AMD64
ATOM
INTEL64
)
# Validate, and then add the favored architecture.
if (NOT MSVC_FAVORED_ARCHITECTURE STREQUAL "blend" AND NOT MSVC_FAVORED_ARCHITECTURE STREQUAL "AMD64" AND NOT MSVC_FAVORED_ARCHITECTURE STREQUAL "INTEL64" AND NOT MSVC_FAVORED_ARCHITECTURE STREQUAL "ATOM")
message(FATAL_ERROR "MSVC_FAVORED_ARCHITECTURE must be set to exactly one of 'blend', 'AMD64', 'INTEL64', or 'ATOM'! Got '${MSVC_FAVORED_ARCHITECTURE}' instead!")
endif()
set(MSVC_LANGUAGE_VERSION "c++latest" CACHE STRING "One of 'c++14', 'c++17', or 'c++latest'. This determines which version of C++ to compile as.")
set_property(
CACHE MSVC_LANGUAGE_VERSION
PROPERTY STRINGS
"c++14"
"c++17"
"c++latest"
)
############################################################
# We need to adjust a couple of the default option sets.
############################################################
# If the static runtime is requested, we have to
# overwrite some of CMake's defaults.
if (MSVC_USE_STATIC_RUNTIME)
foreach(flag_var
CMAKE_C_FLAGS CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_C_FLAGS_MINSIZEREL CMAKE_C_FLAGS_RELWITHDEBINFO
CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO)
if (${flag_var} MATCHES "/MD")
string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
endif()
endforeach()
endif()
# The Ninja generator doesn't de-dup the exception mode flag, so remove the
# default flag so that MSVC doesn't warn about it on every single file.
if ("${CMAKE_GENERATOR}" STREQUAL "Ninja")
foreach(flag_var
CMAKE_C_FLAGS CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_C_FLAGS_MINSIZEREL CMAKE_C_FLAGS_RELWITHDEBINFO
CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO)
if (${flag_var} MATCHES "/EHsc")
string(REGEX REPLACE "/EHsc" "" ${flag_var} "${${flag_var}}")
endif()
endforeach()
endif()
# In order for /Zc:inline, which speeds up the build significantly, to work
# we need to remove the /Ob0 parameter that CMake adds by default, because that
# would normally disable all inlining.
foreach(flag_var CMAKE_C_FLAGS_DEBUG CMAKE_CXX_FLAGS_DEBUG)
if (${flag_var} MATCHES "/Ob0")
string(REGEX REPLACE "/Ob0" "" ${flag_var} "${${flag_var}}")
endif()
endforeach()
# Apply the option set for Folly to the specified target.
function(apply_folly_compile_options_to_target THETARGET)
# The general options passed:
target_compile_options(${THETARGET}
PUBLIC
/EHa # Enable both SEH and C++ Exceptions.
/GF # There are bugs with constexpr StringPiece when string pooling is disabled.
/Zc:referenceBinding # Disallow temporaries from binding to non-const lvalue references.
/Zc:rvalueCast # Enforce the standard rules for explicit type conversion.
/Zc:implicitNoexcept # Enable implicit noexcept specifications where required, such as destructors.
/Zc:strictStrings # Don't allow conversion from a string literal to mutable characters.
/Zc:threadSafeInit # Enable thread-safe function-local statics initialization.
/Zc:throwingNew # Assume operator new throws on failure.
/permissive- # Be mean, don't allow bad non-standard stuff (C++/CLI, __declspec, etc. are all left intact).
/std:${MSVC_LANGUAGE_VERSION} # Build in the requested version of C++
PRIVATE
/bigobj # Support objects with > 65k sections. Needed due to templates.
/favor:${MSVC_FAVORED_ARCHITECTURE} # Architecture to prefer when generating code.
/Zc:inline # Have the compiler eliminate unreferenced COMDAT functions and data before emitting the object file.
$<$<BOOL:${MSVC_ENABLE_ALL_WARNINGS}>:/Wall> # Enable all warnings if requested.
$<$<BOOL:${MSVC_ENABLE_PARALLEL_BUILD}>:/MP> # Enable multi-processor compilation if requested.
$<$<BOOL:${MSVC_ENABLE_STATIC_ANALYSIS}>:/analyze> # Enable static analysis if requested.
# Debug builds
$<$<CONFIG:DEBUG>:
/Gy- # Disable function level linking.
$<$<BOOL:${MSVC_ENABLE_DEBUG_INLINING}>:/Ob2> # Add /Ob2 if allowing inlining in debug mode.
>
# Non-debug builds
$<$<NOT:$<CONFIG:DEBUG>>:
/Gw # Optimize global data. (-fdata-sections)
/Gy # Enable function level linking. (-ffunction-sections)
/Qpar # Enable parallel code generation.
/Oi # Enable intrinsic functions.
/Ot # Favor fast code.
$<$<BOOL:${MSVC_ENABLE_LTCG}>:/GL> # Enable link time code generation.
>
)
target_compile_options(${THETARGET}
PUBLIC
/wd4191 # 'type cast' unsafe conversion of function pointers
/wd4291 # no matching operator delete found
/wd4309 # '=' truncation of constant value
/wd4310 # cast truncates constant value
/wd4366 # result of unary '&' operator may be unaligned
/wd4587 # behavior change; constructor no longer implicitly called
/wd4592 # symbol will be dynamically initialized (implementation limitation)
/wd4628 # digraphs not supported with -Ze
/wd4723 # potential divide by 0
/wd4724 # potential mod by 0
/wd4868 # compiler may not enforce left-to-right evaluation order
/wd4996 # user deprecated
# The warnings that are disabled:
/wd4068 # Unknown pragma.
/wd4091 # 'typedef' ignored on left of '' when no variable is declared.
/wd4146 # Unary minus applied to unsigned type, result still unsigned.
/wd4800 # Values being forced to bool, this happens many places, and is a "performance warning".
# NOTE: glog/logging.h:1116 change to `size_t pcount() const { return size_t(pptr() - pbase()); }`
# NOTE: gmock/gmock-spec-builders.h:1177 change to `*static_cast<const Action<F>*>(untyped_actions_[size_t(count - 1)]) :`
# NOTE: gmock/gmock-spec-builders.h:1749 change to `const size_t count = untyped_expectations_.size();`
# NOTE: gmock/gmock-spec-builders.h:1754 change to `for (size_t i = 0; i < count; i++) {`
# NOTE: gtest/gtest-printers.h:173 change to `const internal::BiggestInt kBigInt = internal::BiggestInt(value);`
# NOTE: gtest/internal/gtest-internal.h:890 add `GTEST_DISABLE_MSC_WARNINGS_PUSH_(4365)`
# NOTE: gtest/internal/gtest-internal.h:894 add `GTEST_DISABLE_MSC_WARNINGS_POP_()`
# NOTE: boost/crc.hpp:578 change to `{ return static_cast<unsigned char>(x ^ rem); }`
# NOTE: boost/regex/v4/match_results.hpp:126 change to `return m_subs[size_type(sub)].length();`
# NOTE: boost/regex/v4/match_results.hpp:226 change to `return m_subs[size_type(sub)];`
# NOTE: boost/date_time/adjust_functors.hpp:67 change to `origDayOfMonth_ = short(ymd.day);`
# NOTE: boost/date_time/adjust_functors.hpp:75 change to `wrap_int2 wi(short(ymd.month));`
# NOTE: boost/date_time/adjust_functors.hpp:82 change to `day_type resultingEndOfMonthDay(cal_type::end_of_month_day(static_cast<unsigned short>(year), static_cast<unsigned short>(wi.as_int())));`
# NOTE: boost/date_time/adjust_functors.hpp:85 change to `return date_type(static_cast<unsigned short>(year), static_cast<unsigned short>(wi.as_int()), resultingEndOfMonthDay) - d;`
# NOTE: boost/date_time/adjust_functors.hpp:87 change to `day_type dayOfMonth = static_cast<unsigned short>(origDayOfMonth_);`
# NOTE: boost/date_time/adjust_functors.hpp:91 change to `return date_type(static_cast<unsigned short>(year), static_cast<unsigned short>(wi.as_int()), dayOfMonth) - d;`
# NOTE: boost/date_time/adjust_functors.hpp:98 change to `origDayOfMonth_ = short(ymd.day);`
# NOTE: boost/date_time/adjust_functors.hpp:106 change to `wrap_int2 wi(short(ymd.month));`
# NOTE: boost/date_time/adjust_functors.hpp:111 change to `day_type resultingEndOfMonthDay(cal_type::end_of_month_day(static_cast<unsigned short>(year), static_cast<unsigned short>(wi.as_int())));`
# NOTE: boost/date_time/adjust_functors.hpp:114 change to `return date_type(static_cast<unsigned short>(year), static_cast<unsigned short>(wi.as_int()), resultingEndOfMonthDay) - d;`
# NOTE: boost/date_time/adjust_functors.hpp:116 change to `day_type dayOfMonth = static_cast<unsigned short>(origDayOfMonth_);`
# NOTE: boost/date_time/adjust_functors.hpp:120 change to `return date_type(static_cast<unsigned short>(year), static_cast<unsigned short>(wi.as_int()), dayOfMonth) - d;`
# NOTE: boost/date_time/gregorian_calendar.ipp:81 change to `unsigned long d = static_cast<unsigned long>(ymd.day + ((153*m + 2)/5) + 365*y + (y/4) - (y/100) + (y/400) - 32045);`
# NOTE: boost/date_time/gregorian/greg_date.hpp:122 change to `unsigned short eom_day = gregorian_calendar::end_of_month_day(ymd.year, ymd.month);`
# NOTE: boost/thread/future.hpp:1050 change to `locks[std::ptrdiff_t(i)]=BOOST_THREAD_MAKE_RV_REF(boost::unique_lock<boost::mutex>(futures[i].future_->mutex));`
# NOTE: boost/thread/future.hpp:1063 change to `locks[std::ptrdiff_t(i)].unlock();`
# NOTE: boost/thread/win32/basic_recursive_mutex.hpp:47 change to `long const current_thread_id=long(win32::GetCurrentThreadId());`
# NOTE: boost/thread/win32/basic_recursive_mutex.hpp:53 change to `long const current_thread_id=long(win32::GetCurrentThreadId());`
# NOTE: boost/thread/win32/basic_recursive_mutex.hpp:64 change to `long const current_thread_id=long(win32::GetCurrentThreadId());`
# NOTE: boost/thread/win32/basic_recursive_mutex.hpp:78 change to `long const current_thread_id=long(win32::GetCurrentThreadId());`
# NOTE: boost/thread/win32/basic_recursive_mutex.hpp:84 change to `long const current_thread_id=long(win32::GetCurrentThreadId());`
# NOTE: boost/thread/win32/condition_variable.hpp:79 change to `detail::win32::ReleaseSemaphore(semaphore,long(count_to_release),0);`
# NOTE: boost/thread/win32/condition_variable.hpp:84 change to `release(unsigned(detail::interlocked_read_acquire(&waiters)));`
# NOTE: boost/algorithm/string/detail/classification.hpp:85 change to `std::size_t Size=std::size_t(::boost::distance(Range));`
/wd4018 # Signed/unsigned mismatch.
/wd4365 # Signed/unsigned mismatch.
/wd4388 # Signed/unsigned mismatch on relative comparison operator.
/wd4389 # Signed/unsigned mismatch on equality comparison operator.
# TODO:
/wd4100 # Unreferenced formal parameter.
/wd4459 # Declaration of parameter hides global declaration.
/wd4505 # Unreferenced local function has been removed.
/wd4701 # Potentially uninitialized local variable used.
/wd4702 # Unreachable code.
# These warnings are disabled because we've
# enabled all warnings. If all warnings are
# not enabled, we still need to disable them
# for consuming libs.
/wd4061 # Enum value not handled by a case in a switch on an enum. This isn't very helpful because it is produced even if a default statement is present.
/wd4127 # Conditional expression is constant.
/wd4200 # Non-standard extension, zero sized array.
/wd4201 # Non-standard extension used: nameless struct/union.
/wd4296 # '<' Expression is always false.
/wd4316 # Object allocated on the heap may not be aligned to 128.
/wd4324 # Structure was padded due to alignment specifier.
/wd4355 # 'this' used in base member initializer list.
/wd4371 # Layout of class may have changed due to fixes in packing.
/wd4435 # Object layout under /vd2 will change due to virtual base.
/wd4514 # Unreferenced inline function has been removed. (caused by /Zc:inline)
/wd4548 # Expression before comma has no effect. I wouldn't disable this normally, but malloc.h triggers this warning.
/wd4574 # ifdef'd macro was defined to 0.
/wd4582 # Constructor is not implicitly called.
/wd4583 # Destructor is not implicitly called.
/wd4619 # Invalid warning number used in #pragma warning.
/wd4623 # Default constructor was implicitly defined as deleted.
/wd4625 # Copy constructor was implicitly defined as deleted.
/wd4626 # Assignment operator was implicitly defined as deleted.
/wd4643 # Forward declaring standard library types is not permitted.
/wd4647 # Behavior change in __is_pod.
/wd4668 # Macro was not defined, replacing with 0.
/wd4706 # Assignment within conditional expression.
/wd4710 # Function was not inlined.
/wd4711 # Function was selected for automated inlining.
/wd4714 # Function marked as __forceinline not inlined.
/wd4820 # Padding added after data member.
/wd5026 # Move constructor was implicitly defined as deleted.
/wd5027 # Move assignment operator was implicitly defined as deleted.
/wd5031 # #pragma warning(pop): likely mismatch, popping warning state pushed in different file. This is needed because of how boost does things.
/wd5045 # Compiler will insert Spectre mitigation for memory load if /Qspectre switch is specified.
# Warnings to treat as errors:
/we4099 # Mixed use of struct and class on same type names.
/we4129 # Unknown escape sequence. This is usually caused by incorrect escaping.
/we4566 # Character cannot be represented in current charset. This is remedied by prefixing the string with "u8".
PRIVATE
# Warnings disabled for /analyze
$<$<BOOL:${MSVC_ENABLE_STATIC_ANALYSIS}>:
/wd6001 # Using uninitialized memory. This is disabled because it is wrong 99% of the time.
/wd6011 # Dereferencing potentially NULL pointer.
/wd6031 # Return value ignored.
/wd6235 # (<non-zero constant> || <expression>) is always a non-zero constant.
/wd6237 # (<zero> && <expression>) is always zero. <expression> is never evaluated and may have side effects.
/wd6239 # (<non-zero constant> && <expression>) always evaluates to the result of <expression>.
/wd6240 # (<expression> && <non-zero constant>) always evaluates to the result of <expression>.
/wd6246 # Local declaration hides declaration of same name in outer scope.
/wd6248 # Setting a SECURITY_DESCRIPTOR's DACL to NULL will result in an unprotected object. This is done by one of the boost headers.
/wd6255 # _alloca indicates failure by raising a stack overflow exception.
/wd6262 # Function uses more than x bytes of stack space.
/wd6271 # Extra parameter passed to format function. The analysis pass doesn't recognize %j or %z, even though the runtime does.
/wd6285 # (<non-zero constant> || <non-zero constant>) is always true.
/wd6297 # 32-bit value is shifted then cast to 64-bits. The places this occurs never use more than 32 bits.
/wd6308 # Realloc might return null pointer: assigning null pointer to '<name>', which is passed as an argument to 'realloc', will cause the original memory to leak.
/wd6326 # Potential comparison of a constant with another constant.
/wd6330 # Unsigned/signed mismatch when passed as a parameter.
/wd6340 # Mismatch on sign when passed as format string value.
/wd6387 # '<value>' could be '0': This does not adhere to the specification for a function.
/wd28182 # Dereferencing NULL pointer. '<value>' contains the same NULL value as '<expression>'.
/wd28251 # Inconsistent annotation for function. This is because we only annotate the declaration and not the definition.
/wd28278 # Function appears with no prototype in scope.
>
)
# And the extra defines:
target_compile_definitions(${THETARGET}
PUBLIC
_CRT_NONSTDC_NO_WARNINGS # Don't deprecate posix names of functions.
_CRT_SECURE_NO_WARNINGS # Don't deprecate the non _s versions of various standard library functions, because safety is for chumps.
_SCL_SECURE_NO_WARNINGS # Don't deprecate the non _s versions of various standard library functions, because safety is for chumps.
_ENABLE_EXTENDED_ALIGNED_STORAGE # Allow types with extended alignment (VS 15.8 or later).
_STL_EXTRA_DISABLED_WARNINGS=4774\ 4987
$<$<BOOL:${MSVC_ENABLE_CPP_LATEST}>:_HAS_AUTO_PTR_ETC=1> # We're building in C++ 17 or greater mode, but certain dependencies (Boost) still have dependencies on unary_function and binary_function, so we have to make sure not to remove them.
$<$<BOOL:${MSVC_ENABLE_LEAN_AND_MEAN_WINDOWS}>:WIN32_LEAN_AND_MEAN> # Don't include most of Windows.h
$<$<BOOL:${MSVC_SUPPRESS_BOOST_CONFIG_OUTDATED}>:BOOST_CONFIG_SUPPRESS_OUTDATED_MESSAGE> # MSVC moves faster than boost, so add a quick way to disable the messages.
)
# Ignore a warning about an object file not defining any symbols,
# these are known, and we don't care.
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY STATIC_LIBRARY_FLAGS " /ignore:4221")
# The options to pass to the linker:
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_DEBUG " /INCREMENTAL") # Do incremental linking.
if (NOT $<TARGET_PROPERTY:${THETARGET},TYPE> STREQUAL "STATIC_LIBRARY")
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_DEBUG " /OPT:NOREF") # No unreferenced data elimination.
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_DEBUG " /OPT:NOICF") # No Identical COMDAT folding.
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_RELEASE " /OPT:REF") # Remove unreferenced functions and data.
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_RELEASE " /OPT:ICF") # Identical COMDAT folding.
endif()
if (MSVC_ENABLE_FAST_LINK)
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_DEBUG " /DEBUG:FASTLINK") # Generate a partial PDB file that simply references the original object and library files.
endif()
# Add /GL to the compiler, and /LTCG to the linker
# if link time code generation is enabled.
if (MSVC_ENABLE_LTCG)
set_property(TARGET ${THETARGET} APPEND_STRING PROPERTY LINK_FLAGS_RELEASE " /LTCG")
endif()
endfunction()
list(APPEND FOLLY_LINK_LIBRARIES Iphlpapi.lib Ws2_32.lib)


@@ -1,30 +0,0 @@
set(CMAKE_CXX_FLAGS_COMMON "-g -Wall -Wextra")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_COMMON}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_COMMON} -O3")
set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -std=gnu++14")
function(apply_folly_compile_options_to_target THETARGET)
target_compile_definitions(${THETARGET}
PRIVATE
_REENTRANT
_GNU_SOURCE
"FOLLY_XLOG_STRIP_PREFIXES=\"${FOLLY_DIR_PREFIXES}\""
)
target_compile_options(${THETARGET}
PRIVATE
-g
-std=gnu++14
-finput-charset=UTF-8
-fsigned-char
-Werror
-Wall
-Wno-deprecated
-Wno-deprecated-declarations
-Wno-sign-compare
-Wno-unused
-Wunused-label
-Wunused-result
-Wnon-virtual-dtor
${FOLLY_CXX_FLAGS}
)
endfunction()


@@ -1,243 +0,0 @@
include(CheckCXXSourceCompiles)
include(CheckCXXSourceRuns)
include(CheckFunctionExists)
include(CheckIncludeFileCXX)
include(CheckSymbolExists)
include(CheckTypeSize)
include(CheckCXXCompilerFlag)
CHECK_INCLUDE_FILE_CXX(jemalloc/jemalloc.h FOLLY_USE_JEMALLOC)
if(NOT CMAKE_SYSTEM_NAME STREQUAL "Windows")
# clang only rejects unknown warning flags if -Werror=unknown-warning-option
# is also specified.
CHECK_CXX_COMPILER_FLAG(
-Werror=unknown-warning-option
COMPILER_HAS_UNKNOWN_WARNING_OPTION)
if (COMPILER_HAS_UNKNOWN_WARNING_OPTION)
set(CMAKE_REQUIRED_FLAGS
"${CMAKE_REQUIRED_FLAGS} -Werror=unknown-warning-option")
endif()
CHECK_CXX_COMPILER_FLAG(-Wshadow-local COMPILER_HAS_W_SHADOW_LOCAL)
CHECK_CXX_COMPILER_FLAG(
-Wshadow-compatible-local
COMPILER_HAS_W_SHADOW_COMPATIBLE_LOCAL)
if (COMPILER_HAS_W_SHADOW_LOCAL AND COMPILER_HAS_W_SHADOW_COMPATIBLE_LOCAL)
set(FOLLY_HAVE_SHADOW_LOCAL_WARNINGS ON)
list(APPEND FOLLY_CXX_FLAGS -Wshadow-compatible-local)
endif()
CHECK_CXX_COMPILER_FLAG(-Wnoexcept-type COMPILER_HAS_W_NOEXCEPT_TYPE)
if (COMPILER_HAS_W_NOEXCEPT_TYPE)
list(APPEND FOLLY_CXX_FLAGS -Wno-noexcept-type)
endif()
CHECK_CXX_COMPILER_FLAG(
-Wnullability-completeness
COMPILER_HAS_W_NULLABILITY_COMPLETENESS)
if (COMPILER_HAS_W_NULLABILITY_COMPLETENESS)
list(APPEND FOLLY_CXX_FLAGS -Wno-nullability-completeness)
endif()
CHECK_CXX_COMPILER_FLAG(
-Winconsistent-missing-override
COMPILER_HAS_W_INCONSISTENT_MISSING_OVERRIDE)
if (COMPILER_HAS_W_INCONSISTENT_MISSING_OVERRIDE)
list(APPEND FOLLY_CXX_FLAGS -Wno-inconsistent-missing-override)
endif()
CHECK_CXX_COMPILER_FLAG(-faligned-new COMPILER_HAS_F_ALIGNED_NEW)
if (COMPILER_HAS_F_ALIGNED_NEW)
list(APPEND FOLLY_CXX_FLAGS -faligned-new)
endif()
CHECK_CXX_COMPILER_FLAG(-fopenmp COMPILER_HAS_F_OPENMP)
if (COMPILER_HAS_F_OPENMP)
list(APPEND FOLLY_CXX_FLAGS -fopenmp)
endif()
endif()
set(FOLLY_ORIGINAL_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
string(REGEX REPLACE
"-std=(c|gnu)\\+\\+.."
""
CMAKE_REQUIRED_FLAGS
"${CMAKE_REQUIRED_FLAGS}")
check_symbol_exists(pthread_atfork pthread.h FOLLY_HAVE_PTHREAD_ATFORK)
# Unfortunately check_symbol_exists() does not work for memrchr():
# it fails complaining that there are multiple overloaded versions of memrchr()
check_function_exists(memrchr FOLLY_HAVE_MEMRCHR)
check_symbol_exists(preadv sys/uio.h FOLLY_HAVE_PREADV)
check_symbol_exists(pwritev sys/uio.h FOLLY_HAVE_PWRITEV)
check_symbol_exists(clock_gettime time.h FOLLY_HAVE_CLOCK_GETTIME)
check_function_exists(malloc_usable_size FOLLY_HAVE_MALLOC_USABLE_SIZE)
set(CMAKE_REQUIRED_FLAGS "${FOLLY_ORIGINAL_CMAKE_REQUIRED_FLAGS}")
check_cxx_source_compiles("
#pragma GCC diagnostic error \"-Wattributes\"
extern \"C\" void (*test_ifunc(void))() { return 0; }
void func() __attribute__((ifunc(\"test_ifunc\")));
int main() { return 0; }"
FOLLY_HAVE_IFUNC
)
check_cxx_source_compiles("
#include <type_traits>
const bool val = std::is_trivially_copyable<bool>::value;
int main() { return 0; }"
FOLLY_HAVE_STD__IS_TRIVIALLY_COPYABLE
)
check_cxx_source_runs("
int main(int, char**) {
char buf[64] = {0};
unsigned long *ptr = (unsigned long *)(buf + 1);
*ptr = 0xdeadbeef;
return (*ptr & 0xff) == 0xef ? 0 : 1;
}"
FOLLY_HAVE_UNALIGNED_ACCESS
)
check_cxx_source_compiles("
int main(int argc, char** argv) {
unsigned size = argc;
char data[size];
return 0;
}"
FOLLY_HAVE_VLA
)
check_cxx_source_compiles("
extern \"C\" void configure_link_extern_weak_test() __attribute__((weak));
int main(int argc, char** argv) {
return configure_link_extern_weak_test == nullptr;
}"
FOLLY_HAVE_WEAK_SYMBOLS
)
check_cxx_source_runs("
#include <dlfcn.h>
int main() {
void *h = dlopen(\"linux-vdso.so.1\", RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
if (h == nullptr) {
return -1;
}
dlclose(h);
return 0;
}"
FOLLY_HAVE_LINUX_VDSO
)
check_type_size(__int128 INT128_SIZE LANGUAGE CXX)
if (NOT INT128_SIZE STREQUAL "")
set(FOLLY_HAVE_INT128_T ON)
check_cxx_source_compiles("
#include <functional>
#include <type_traits>
#include <utility>
static_assert(
::std::is_same<::std::make_signed<unsigned __int128>::type,
__int128>::value,
\"signed form of 'unsigned __int128' must be '__int128'.\");
static_assert(
sizeof(::std::hash<__int128>{}(0)) > 0, \
\"std::hash<__int128> is disabled.\");
int main() { return 0; }"
HAVE_INT128_TRAITS
)
if (HAVE_INT128_TRAITS)
set(FOLLY_SUPPLY_MISSING_INT128_TRAITS OFF)
else()
set(FOLLY_SUPPLY_MISSING_INT128_TRAITS ON)
endif()
endif()
check_cxx_source_runs("
#include <cstddef>
#include <cwchar>
int main(int argc, char** argv) {
return wcstol(L\"01\", nullptr, 10) == 1 ? 0 : 1;
}"
FOLLY_HAVE_WCHAR_SUPPORT
)
check_cxx_source_compiles("
#include <ext/random>
int main(int argc, char** argv) {
__gnu_cxx::sfmt19937 rng;
return 0;
}"
FOLLY_HAVE_EXTRANDOM_SFMT19937
)
check_cxx_source_compiles("
#include <type_traits>
#if !_LIBCPP_VERSION
#error No libc++
#endif
int main() { return 0; }"
FOLLY_USE_LIBCPP
)
check_cxx_source_compiles("
#include <type_traits>
#if !__GLIBCXX__
#error No libstdc++
#endif
int main() { return 0; }"
FOLLY_USE_LIBSTDCPP
)
check_cxx_source_runs("
#include <string.h>
#include <errno.h>
int main(int argc, char** argv) {
char buf[1024];
buf[0] = 0;
int ret = strerror_r(ENOMEM, buf, sizeof(buf));
return ret;
}"
FOLLY_HAVE_XSI_STRERROR_R
)
check_cxx_source_runs("
#include <stdarg.h>
#include <stdio.h>
int call_vsnprintf(const char* fmt, ...) {
char buf[256];
va_list ap;
va_start(ap, fmt);
int result = vsnprintf(buf, sizeof(buf), fmt, ap);
va_end(ap);
return result;
}
int main(int argc, char** argv) {
return call_vsnprintf(\"%\", 1) < 0 ? 0 : 1;
}"
HAVE_VSNPRINTF_ERRORS
)
if (FOLLY_HAVE_LIBGFLAGS)
# Older releases of gflags used the namespace "gflags"; newer releases
# use "google" but also make symbols available in the deprecated "gflags"
# namespace too. The folly code internally uses "gflags" unless we tell it
# otherwise.
check_cxx_source_compiles("
#include <gflags/gflags.h>
int main() {
gflags::GetArgv();
return 0;
}
"
GFLAGS_NAMESPACE_IS_GFLAGS
)
if (GFLAGS_NAMESPACE_IS_GFLAGS)
set(FOLLY_UNUSUAL_GFLAGS_NAMESPACE OFF)
set(FOLLY_GFLAGS_NAMESPACE gflags)
else()
set(FOLLY_UNUSUAL_GFLAGS_NAMESPACE ON)
set(FOLLY_GFLAGS_NAMESPACE google)
endif()
endif()


@@ -1,305 +0,0 @@
function(auto_sources RETURN_VALUE PATTERN SOURCE_SUBDIRS)
if ("${SOURCE_SUBDIRS}" STREQUAL "RECURSE")
SET(PATH ".")
if (${ARGC} EQUAL 4)
list(GET ARGV 3 PATH)
endif ()
endif()
if ("${SOURCE_SUBDIRS}" STREQUAL "RECURSE")
unset(${RETURN_VALUE})
file(GLOB SUBDIR_FILES "${PATH}/${PATTERN}")
list(APPEND ${RETURN_VALUE} ${SUBDIR_FILES})
file(GLOB subdirs RELATIVE ${PATH} ${PATH}/*)
foreach(DIR ${subdirs})
if (IS_DIRECTORY ${PATH}/${DIR})
if (NOT "${DIR}" STREQUAL "CMakeFiles")
file(GLOB_RECURSE SUBDIR_FILES "${PATH}/${DIR}/${PATTERN}")
list(APPEND ${RETURN_VALUE} ${SUBDIR_FILES})
endif()
endif()
endforeach()
else()
file(GLOB ${RETURN_VALUE} "${PATTERN}")
foreach (PATH ${SOURCE_SUBDIRS})
file(GLOB SUBDIR_FILES "${PATH}/${PATTERN}")
list(APPEND ${RETURN_VALUE} ${SUBDIR_FILES})
endforeach()
endif ()
set(${RETURN_VALUE} ${${RETURN_VALUE}} PARENT_SCOPE)
endfunction(auto_sources)
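As a usage sketch, `auto_sources` can be invoked either recursively (with an optional base directory, as done later in this build with `${FOLLY_DIR}`) or with an explicit semicolon-separated subdirectory list; the subdirectory names below are placeholders:

```cmake
# Recursive collection: gather every .cpp under ${FOLLY_DIR} into `files`.
auto_sources(files "*.cpp" "RECURSE" "${FOLLY_DIR}")

# Non-recursive collection: glob the current directory's headers plus the
# listed (hypothetical) subdirectories only.
auto_sources(local_headers "*.h" "detail;portability")
```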
# Remove all files matching a set of patterns, and,
# optionally, not matching a second set of patterns,
# from a set of lists.
#
# Example:
# This will remove all files in the CPP_SOURCES list
# matching "/test/" or "Test.cpp$", but not matching
# "BobTest.cpp$".
# REMOVE_MATCHES_FROM_LISTS(CPP_SOURCES MATCHES "/test/" "Test.cpp$" IGNORE_MATCHES "BobTest.cpp$")
#
# Parameters:
#
# [...]:
# The names of the lists to remove matches from.
#
# [MATCHES ...]:
# The matches to remove from the lists.
#
# [IGNORE_MATCHES ...]:
# The matches not to remove, even if they match
# the main set of matches to remove.
function(REMOVE_MATCHES_FROM_LISTS)
set(LISTS_TO_SEARCH)
set(MATCHES_TO_REMOVE)
set(MATCHES_TO_IGNORE)
set(argumentState 0)
foreach (arg ${ARGN})
if ("x${arg}" STREQUAL "xMATCHES")
set(argumentState 1)
elseif ("x${arg}" STREQUAL "xIGNORE_MATCHES")
set(argumentState 2)
elseif (argumentState EQUAL 0)
list(APPEND LISTS_TO_SEARCH ${arg})
elseif (argumentState EQUAL 1)
list(APPEND MATCHES_TO_REMOVE ${arg})
elseif (argumentState EQUAL 2)
list(APPEND MATCHES_TO_IGNORE ${arg})
else()
message(FATAL_ERROR "Unknown argument state!")
endif()
endforeach()
foreach (theList ${LISTS_TO_SEARCH})
foreach (entry ${${theList}})
foreach (match ${MATCHES_TO_REMOVE})
if (${entry} MATCHES ${match})
set(SHOULD_IGNORE OFF)
foreach (ign ${MATCHES_TO_IGNORE})
if (${entry} MATCHES ${ign})
set(SHOULD_IGNORE ON)
break()
endif()
endforeach()
if (NOT SHOULD_IGNORE)
list(REMOVE_ITEM ${theList} ${entry})
endif()
endif()
endforeach()
endforeach()
set(${theList} ${${theList}} PARENT_SCOPE)
endforeach()
endfunction()
# Automatically create source_group directives for the sources passed in.
function(auto_source_group rootName rootDir)
file(TO_CMAKE_PATH "${rootDir}" rootDir)
string(LENGTH "${rootDir}" rootDirLength)
set(sourceGroups)
foreach (fil ${ARGN})
file(TO_CMAKE_PATH "${fil}" filePath)
string(FIND "${filePath}" "/" rIdx REVERSE)
if (rIdx EQUAL -1)
message(FATAL_ERROR "Unable to locate the final forward slash in '${filePath}'!")
endif()
string(SUBSTRING "${filePath}" 0 ${rIdx} filePath)
string(LENGTH "${filePath}" filePathLength)
string(FIND "${filePath}" "${rootDir}" rIdx)
if (rIdx EQUAL 0)
math(EXPR filePathLength "${filePathLength} - ${rootDirLength}")
string(SUBSTRING "${filePath}" ${rootDirLength} ${filePathLength} fileGroup)
string(REPLACE "/" "\\" fileGroup "${fileGroup}")
set(fileGroup "\\${rootName}${fileGroup}")
list(FIND sourceGroups "${fileGroup}" rIdx)
if (rIdx EQUAL -1)
list(APPEND sourceGroups "${fileGroup}")
        source_group("${fileGroup}" REGULAR_EXPRESSION "${filePath}/[^/.]+\\.(cpp|h)$")
endif()
endif()
endforeach()
endfunction()
# CMake is a pain and doesn't have an easy way to install only the files
# we actually included in our build :(
function(auto_install_files rootName rootDir)
file(TO_CMAKE_PATH "${rootDir}" rootDir)
string(LENGTH "${rootDir}" rootDirLength)
set(sourceGroups)
foreach (fil ${ARGN})
file(TO_CMAKE_PATH "${fil}" filePath)
string(FIND "${filePath}" "/" rIdx REVERSE)
if (rIdx EQUAL -1)
message(FATAL_ERROR "Unable to locate the final forward slash in '${filePath}'!")
endif()
string(SUBSTRING "${filePath}" 0 ${rIdx} filePath)
string(LENGTH "${filePath}" filePathLength)
string(FIND "${filePath}" "${rootDir}" rIdx)
if (rIdx EQUAL 0)
math(EXPR filePathLength "${filePathLength} - ${rootDirLength}")
string(SUBSTRING "${filePath}" ${rootDirLength} ${filePathLength} fileGroup)
install(FILES ${fil}
DESTINATION ${INCLUDE_INSTALL_DIR}/${rootName}${fileGroup})
endif()
endforeach()
endfunction()
function(folly_define_tests)
set(directory_count 0)
set(test_count 0)
set(currentArg 0)
while (currentArg LESS ${ARGC})
if ("x${ARGV${currentArg}}" STREQUAL "xDIRECTORY")
math(EXPR currentArg "${currentArg} + 1")
if (NOT currentArg LESS ${ARGC})
message(FATAL_ERROR "Expected base directory!")
endif()
set(cur_dir ${directory_count})
math(EXPR directory_count "${directory_count} + 1")
set(directory_${cur_dir}_name "${ARGV${currentArg}}")
# We need a single list of sources to get source_group to work nicely.
set(directory_${cur_dir}_source_list)
math(EXPR currentArg "${currentArg} + 1")
while (currentArg LESS ${ARGC})
if ("x${ARGV${currentArg}}" STREQUAL "xDIRECTORY")
break()
elseif ("x${ARGV${currentArg}}" STREQUAL "xTEST")
math(EXPR currentArg "${currentArg} + 1")
if (NOT currentArg LESS ${ARGC})
message(FATAL_ERROR "Expected test name!")
endif()
set(cur_test ${test_count})
math(EXPR test_count "${test_count} + 1")
set(test_${cur_test}_name "${ARGV${currentArg}}")
math(EXPR currentArg "${currentArg} + 1")
set(test_${cur_test}_directory ${cur_dir})
set(test_${cur_test}_content_dir)
set(test_${cur_test}_headers)
set(test_${cur_test}_sources)
set(test_${cur_test}_tag "NONE")
set(argumentState 0)
while (currentArg LESS ${ARGC})
if ("x${ARGV${currentArg}}" STREQUAL "xHEADERS")
set(argumentState 1)
elseif ("x${ARGV${currentArg}}" STREQUAL "xSOURCES")
set(argumentState 2)
elseif ("x${ARGV${currentArg}}" STREQUAL "xCONTENT_DIR")
math(EXPR currentArg "${currentArg} + 1")
if (NOT currentArg LESS ${ARGC})
message(FATAL_ERROR "Expected content directory name!")
endif()
set(test_${cur_test}_content_dir "${ARGV${currentArg}}")
elseif ("x${ARGV${currentArg}}" STREQUAL "xTEST" OR
"x${ARGV${currentArg}}" STREQUAL "xDIRECTORY")
break()
elseif (argumentState EQUAL 0)
if ("x${ARGV${currentArg}}" STREQUAL "xBROKEN")
set(test_${cur_test}_tag "BROKEN")
elseif ("x${ARGV${currentArg}}" STREQUAL "xHANGING")
set(test_${cur_test}_tag "HANGING")
elseif ("x${ARGV${currentArg}}" STREQUAL "xSLOW")
set(test_${cur_test}_tag "SLOW")
elseif ("x${ARGV${currentArg}}" STREQUAL "xWINDOWS_DISABLED")
set(test_${cur_test}_tag "WINDOWS_DISABLED")
else()
message(FATAL_ERROR "Unknown test tag '${ARGV${currentArg}}'!")
endif()
elseif (argumentState EQUAL 1)
list(APPEND test_${cur_test}_headers
"${FOLLY_DIR}/${directory_${cur_dir}_name}${ARGV${currentArg}}"
)
elseif (argumentState EQUAL 2)
list(APPEND test_${cur_test}_sources
"${FOLLY_DIR}/${directory_${cur_dir}_name}${ARGV${currentArg}}"
)
else()
message(FATAL_ERROR "Unknown argument state!")
endif()
math(EXPR currentArg "${currentArg} + 1")
endwhile()
list(APPEND directory_${cur_dir}_source_list
${test_${cur_test}_sources} ${test_${cur_test}_headers})
else()
message(FATAL_ERROR "Unknown argument inside directory '${ARGV${currentArg}}'!")
endif()
endwhile()
else()
message(FATAL_ERROR "Unknown argument '${ARGV${currentArg}}'!")
endif()
endwhile()
set(cur_dir 0)
while (cur_dir LESS directory_count)
source_group("" FILES ${directory_${cur_dir}_source_list})
math(EXPR cur_dir "${cur_dir} + 1")
endwhile()
set(cur_test 0)
while (cur_test LESS test_count)
if ("x${test_${cur_test}_tag}" STREQUAL "xNONE" OR
("x${test_${cur_test}_tag}" STREQUAL "xBROKEN" AND BUILD_BROKEN_TESTS) OR
("x${test_${cur_test}_tag}" STREQUAL "xSLOW" AND BUILD_SLOW_TESTS) OR
("x${test_${cur_test}_tag}" STREQUAL "xHANGING" AND BUILD_HANGING_TESTS) OR
("x${test_${cur_test}_tag}" STREQUAL "xWINDOWS_DISABLED" AND NOT WIN32)
)
set(cur_test_name ${test_${cur_test}_name})
set(cur_dir_name ${directory_${test_${cur_test}_directory}_name})
add_executable(${cur_test_name}
${test_${cur_test}_headers}
${test_${cur_test}_sources}
)
if (HAVE_CMAKE_GTEST)
# If we have CMake's built-in gtest support use it to add each test
# function as a separate test.
gtest_add_tests(TARGET ${cur_test_name}
WORKING_DIRECTORY "${TOP_DIR}"
TEST_PREFIX "${cur_test_name}."
TEST_LIST test_cases)
set_tests_properties(${test_cases} PROPERTIES TIMEOUT 120)
else()
# Otherwise add each test executable as a single test.
add_test(
NAME ${cur_test_name}
COMMAND ${cur_test_name}
WORKING_DIRECTORY "${TOP_DIR}"
)
set_tests_properties(${cur_test_name} PROPERTIES TIMEOUT 120)
endif()
if (NOT "x${test_${cur_test}_content_dir}" STREQUAL "x")
# Copy the content directory to the output directory tree so that
# tests can be run easily from Visual Studio without having to change
# the working directory for each test individually.
file(
COPY "${FOLLY_DIR}/${cur_dir_name}${test_${cur_test}_content_dir}"
DESTINATION "${CMAKE_CURRENT_BINARY_DIR}/folly/${cur_dir_name}${test_${cur_test}_content_dir}"
)
add_custom_command(TARGET ${cur_test_name} POST_BUILD COMMAND
${CMAKE_COMMAND} ARGS -E copy_directory
"${FOLLY_DIR}/${cur_dir_name}${test_${cur_test}_content_dir}"
"$<TARGET_FILE_DIR:${cur_test_name}>/folly/${cur_dir_name}${test_${cur_test}_content_dir}"
COMMENT "Copying test content for ${cur_test_name}" VERBATIM
)
endif()
    # Strip the trailing test directory name for the folder name.
string(REPLACE "test/" "" test_dir_name "${cur_dir_name}")
set_property(TARGET ${cur_test_name} PROPERTY FOLDER "Tests/${test_dir_name}")
target_link_libraries(${cur_test_name} PRIVATE folly_test_support)
apply_folly_compile_options_to_target(${cur_test_name})
endif()
math(EXPR cur_test "${cur_test} + 1")
endwhile()
endfunction()


@@ -1,84 +0,0 @@
# Generate variables that can be used to help emit a pkg-config file
# using configure_file().
#
# Usage: gen_pkgconfig_vars(VAR_PREFIX target)
#
# This will set two variables in the caller scope:
# ${VAR_PREFIX}_CFLAGS: set to the compile flags computed from the specified
# target
# ${VAR_PREFIX}_PRIVATE_LIBS: set to the linker flags needed for static
# linking computed from the specified target
function(gen_pkgconfig_vars)
if (NOT ${ARGC} EQUAL 2)
message(FATAL_ERROR "gen_pkgconfig_vars() requires exactly 2 arguments")
endif()
set(var_prefix "${ARGV0}")
set(target "${ARGV1}")
get_target_property(target_cflags "${target}" INTERFACE_COMPILE_OPTIONS)
if(target_cflags)
list(APPEND cflags "${target_cflags}")
endif()
get_target_property(
target_inc_dirs "${target}" INTERFACE_INCLUDE_DIRECTORIES)
if(target_inc_dirs)
list(APPEND include_dirs "${target_inc_dirs}")
endif()
get_target_property(target_defns "${target}" INTERFACE_COMPILE_DEFINITIONS)
if(target_defns)
list(APPEND definitions "${target_defns}")
endif()
# The INTERFACE_LINK_LIBRARIES list is unfortunately somewhat awkward to
# process. Entries in this list may be any of
# - target names
# - absolute paths to a library file
# - plain library names that need "-l" prepended
# - other linker flags starting with "-"
#
# Walk through each entry and transform it into the desired arguments
get_target_property(link_libs "${target}" INTERFACE_LINK_LIBRARIES)
if(link_libs)
foreach(lib_arg IN LISTS link_libs)
if(TARGET "${lib_arg}")
      # Add any compile options specified in the target's
      # INTERFACE_COMPILE_OPTIONS. We don't need to process its
# INTERFACE_LINK_LIBRARIES property, since our INTERFACE_LINK_LIBRARIES
# will already include its entries transitively.
get_target_property(lib_cflags "${lib_arg}" INTERFACE_COMPILE_OPTIONS)
if(lib_cflags)
list(APPEND cflags "${lib_cflags}")
endif()
get_target_property(lib_defs "${lib_arg}"
INTERFACE_COMPILE_DEFINITIONS)
if(lib_defs)
list(APPEND definitions "${lib_defs}")
endif()
elseif(lib_arg MATCHES "^[-/]")
list(APPEND private_libs "${lib_arg}")
else()
list(APPEND private_libs "-l${lib_arg}")
endif()
endforeach()
endif()
list(APPEND cflags "${CMAKE_REQUIRED_FLAGS}")
if(definitions)
list(REMOVE_DUPLICATES definitions)
foreach(def_arg IN LISTS definitions)
list(APPEND cflags "-D${def_arg}")
endforeach()
endif()
if(include_dirs)
list(REMOVE_DUPLICATES include_dirs)
foreach(inc_dir IN LISTS include_dirs)
list(APPEND cflags "-I${inc_dir}")
endforeach()
endif()
# Set the output variables
string(REPLACE ";" " " cflags "${cflags}")
set("${var_prefix}_CFLAGS" "${cflags}" PARENT_SCOPE)
string(REPLACE ";" " " private_libs "${private_libs}")
set("${var_prefix}_PRIVATE_LIBS" "${private_libs}" PARENT_SCOPE)
endfunction()
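The two output variables are intended to be substituted into a pkg-config template via `configure_file()`. A minimal sketch, mirroring how this build wires `folly_deps` into `libfolly.pc.in`:

```cmake
# Compute FOLLY_PKGCONFIG_CFLAGS / FOLLY_PKGCONFIG_PRIVATE_LIBS from the
# folly_deps interface target, then substitute them into the .pc template.
gen_pkgconfig_vars(FOLLY_PKGCONFIG folly_deps)
configure_file(
  ${CMAKE_CURRENT_SOURCE_DIR}/CMake/libfolly.pc.in
  ${CMAKE_CURRENT_BINARY_DIR}/libfolly.pc
  @ONLY
)
```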


@@ -1,26 +0,0 @@
# CMake configuration file for folly
#
# This provides the Folly::folly target, which you can depend on by adding it
# to your target_link_libraries().
#
# It also defines the following variables, although using these directly is not
# necessary if you use the Folly::folly target instead.
# FOLLY_INCLUDE_DIRS
# FOLLY_LIBRARIES
@PACKAGE_INIT@
set_and_check(FOLLY_INCLUDE_DIR "@PACKAGE_INCLUDE_INSTALL_DIR@")
set_and_check(FOLLY_CMAKE_DIR "@PACKAGE_CMAKE_INSTALL_DIR@")
# Include the folly-targets.cmake file, which is generated from our CMake rules
if (NOT TARGET Folly::folly)
include("${FOLLY_CMAKE_DIR}/folly-targets.cmake")
endif()
# Set FOLLY_LIBRARIES from our Folly::folly target
set(FOLLY_LIBRARIES Folly::folly)
if (NOT folly_FIND_QUIETLY)
message(STATUS "Found folly: ${PACKAGE_PREFIX_DIR}")
endif()
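A downstream project would consume this package config roughly as follows (a sketch; `my_app` and `main.cpp` are placeholders):

```cmake
find_package(folly CONFIG REQUIRED)

add_executable(my_app main.cpp)
# Folly::folly carries the include directories and link dependencies, so no
# manual FOLLY_INCLUDE_DIRS / FOLLY_LIBRARIES plumbing is needed.
target_link_libraries(my_app PRIVATE Folly::folly)
```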


@@ -1,80 +0,0 @@
/*
* Copyright 2016 Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#if !defined(FOLLY_MOBILE)
#if defined(__ANDROID__) || \
(defined(__APPLE__) && \
(TARGET_IPHONE_SIMULATOR || TARGET_OS_SIMULATOR || TARGET_OS_IPHONE))
#define FOLLY_MOBILE 1
#else
#define FOLLY_MOBILE 0
#endif
#endif // FOLLY_MOBILE
#cmakedefine FOLLY_HAVE_PTHREAD 1
#cmakedefine FOLLY_HAVE_PTHREAD_ATFORK 1
#cmakedefine FOLLY_HAVE_LIBGFLAGS 1
#cmakedefine FOLLY_UNUSUAL_GFLAGS_NAMESPACE 1
#cmakedefine FOLLY_GFLAGS_NAMESPACE @FOLLY_GFLAGS_NAMESPACE@
#cmakedefine FOLLY_HAVE_LIBGLOG 1
#cmakedefine FOLLY_USE_JEMALLOC 1
#cmakedefine FOLLY_USE_LIBSTDCPP 1
#if __has_include(<features.h>)
#include <features.h>
#endif
#cmakedefine FOLLY_HAVE_MEMRCHR 1
#cmakedefine FOLLY_HAVE_PREADV 1
#cmakedefine FOLLY_HAVE_PWRITEV 1
#cmakedefine FOLLY_HAVE_CLOCK_GETTIME 1
#cmakedefine FOLLY_HAVE_OPENSSL_ASN1_TIME_DIFF 1
#cmakedefine FOLLY_HAVE_IFUNC 1
#cmakedefine FOLLY_HAVE_STD__IS_TRIVIALLY_COPYABLE 1
#cmakedefine FOLLY_HAVE_UNALIGNED_ACCESS 1
#cmakedefine FOLLY_HAVE_VLA 1
#cmakedefine FOLLY_HAVE_WEAK_SYMBOLS 1
#cmakedefine FOLLY_HAVE_LINUX_VDSO 1
#cmakedefine FOLLY_HAVE_MALLOC_USABLE_SIZE 1
#cmakedefine FOLLY_HAVE_INT128_T 1
#cmakedefine FOLLY_SUPPLY_MISSING_INT128_TRAITS 1
#cmakedefine FOLLY_HAVE_WCHAR_SUPPORT 1
#cmakedefine FOLLY_HAVE_EXTRANDOM_SFMT19937 1
#cmakedefine FOLLY_USE_LIBCPP 1
#cmakedefine FOLLY_HAVE_XSI_STRERROR_R 1
#cmakedefine HAVE_VSNPRINTF_ERRORS 1
#cmakedefine FOLLY_USE_SYMBOLIZER 1
#define FOLLY_DEMANGLE_MAX_SYMBOL_SIZE 1024
#cmakedefine FOLLY_HAVE_SHADOW_LOCAL_WARNINGS 1
#cmakedefine FOLLY_HAVE_LIBLZ4 1
#cmakedefine FOLLY_HAVE_LIBLZMA 1
#cmakedefine FOLLY_HAVE_LIBSNAPPY 1
#cmakedefine FOLLY_HAVE_LIBZ 1
#cmakedefine FOLLY_HAVE_LIBZSTD 1
#cmakedefine FOLLY_HAVE_LIBBZ2 1
#cmakedefine FOLLY_ASAN_ENABLED 1
#cmakedefine FOLLY_SUPPORT_SHARED_LIBRARY 1


@@ -1,214 +0,0 @@
include(CheckCXXSourceCompiles)
include(CheckIncludeFileCXX)
include(CheckFunctionExists)
find_package(Boost 1.51.0 MODULE
COMPONENTS
context
chrono
date_time
filesystem
program_options
regex
system
thread
REQUIRED
)
list(APPEND FOLLY_LINK_LIBRARIES ${Boost_LIBRARIES})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${Boost_INCLUDE_DIRS})
find_package(DoubleConversion MODULE REQUIRED)
list(APPEND FOLLY_LINK_LIBRARIES ${DOUBLE_CONVERSION_LIBRARY})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${DOUBLE_CONVERSION_INCLUDE_DIR})
set(FOLLY_HAVE_LIBGFLAGS OFF)
find_package(GFlags CONFIG QUIET)
if (gflags_FOUND)
message(STATUS "Found gflags from package config")
set(FOLLY_HAVE_LIBGFLAGS ON)
if (TARGET gflags-shared)
list(APPEND FOLLY_SHINY_DEPENDENCIES gflags-shared)
elseif (TARGET gflags)
list(APPEND FOLLY_SHINY_DEPENDENCIES gflags)
else()
message(FATAL_ERROR "Unable to determine the target name for the GFlags package.")
endif()
list(APPEND CMAKE_REQUIRED_LIBRARIES ${GFLAGS_LIBRARIES})
list(APPEND CMAKE_REQUIRED_INCLUDES ${GFLAGS_INCLUDE_DIR})
else()
find_package(GFlags MODULE)
set(FOLLY_HAVE_LIBGFLAGS ${LIBGFLAGS_FOUND})
list(APPEND FOLLY_LINK_LIBRARIES ${LIBGFLAGS_LIBRARY})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBGFLAGS_INCLUDE_DIR})
list(APPEND CMAKE_REQUIRED_LIBRARIES ${LIBGFLAGS_LIBRARY})
list(APPEND CMAKE_REQUIRED_INCLUDES ${LIBGFLAGS_INCLUDE_DIR})
endif()
set(FOLLY_HAVE_LIBGLOG OFF)
find_package(glog CONFIG QUIET)
if (glog_FOUND)
message(STATUS "Found glog from package config")
set(FOLLY_HAVE_LIBGLOG ON)
list(APPEND FOLLY_SHINY_DEPENDENCIES glog::glog)
else()
find_package(GLog MODULE)
set(FOLLY_HAVE_LIBGLOG ${LIBGLOG_FOUND})
list(APPEND FOLLY_LINK_LIBRARIES ${LIBGLOG_LIBRARY})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBGLOG_INCLUDE_DIR})
endif()
find_package(Libevent CONFIG QUIET)
if(TARGET event)
message(STATUS "Found libevent from package config")
list(APPEND FOLLY_SHINY_DEPENDENCIES event)
else()
find_package(LibEvent MODULE REQUIRED)
list(APPEND FOLLY_LINK_LIBRARIES ${LIBEVENT_LIB})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBEVENT_INCLUDE_DIR})
endif()
find_package(OpenSSL MODULE REQUIRED)
list(APPEND FOLLY_LINK_LIBRARIES ${OPENSSL_LIBRARIES})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${OPENSSL_INCLUDE_DIR})
list(APPEND CMAKE_REQUIRED_LIBRARIES ${OPENSSL_LIBRARIES})
list(APPEND CMAKE_REQUIRED_INCLUDES ${OPENSSL_INCLUDE_DIR})
check_function_exists(ASN1_TIME_diff FOLLY_HAVE_OPENSSL_ASN1_TIME_DIFF)
find_package(ZLIB MODULE)
set(FOLLY_HAVE_LIBZ ${ZLIB_FOUND})
if (ZLIB_FOUND)
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${ZLIB_INCLUDE_DIRS})
list(APPEND FOLLY_LINK_LIBRARIES ${ZLIB_LIBRARIES})
endif()
find_package(BZip2 MODULE)
set(FOLLY_HAVE_LIBBZ2 ${BZIP2_FOUND})
if (BZIP2_FOUND)
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${BZIP2_INCLUDE_DIRS})
list(APPEND FOLLY_LINK_LIBRARIES ${BZIP2_LIBRARIES})
endif()
find_package(LibLZMA MODULE)
set(FOLLY_HAVE_LIBLZMA ${LIBLZMA_FOUND})
if (LIBLZMA_FOUND)
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBLZMA_INCLUDE_DIRS})
list(APPEND FOLLY_LINK_LIBRARIES ${LIBLZMA_LIBRARIES})
endif()
find_package(LZ4 MODULE)
set(FOLLY_HAVE_LIBLZ4 ${LZ4_FOUND})
if (LZ4_FOUND)
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LZ4_INCLUDE_DIR})
list(APPEND FOLLY_LINK_LIBRARIES ${LZ4_LIBRARY})
endif()
find_package(Zstd MODULE)
set(FOLLY_HAVE_LIBZSTD ${ZSTD_FOUND})
if(ZSTD_FOUND)
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${ZSTD_INCLUDE_DIR})
list(APPEND FOLLY_LINK_LIBRARIES ${ZSTD_LIBRARY})
endif()
find_package(Snappy MODULE)
set(FOLLY_HAVE_LIBSNAPPY ${SNAPPY_FOUND})
if (SNAPPY_FOUND)
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${SNAPPY_INCLUDE_DIR})
list(APPEND FOLLY_LINK_LIBRARIES ${SNAPPY_LIBRARY})
endif()
find_package(LibDwarf)
list(APPEND FOLLY_LINK_LIBRARIES ${LIBDWARF_LIBRARIES})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBDWARF_INCLUDE_DIRS})
find_package(Libiberty)
list(APPEND FOLLY_LINK_LIBRARIES ${LIBIBERTY_LIBRARIES})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBIBERTY_INCLUDE_DIRS})
find_package(LibAIO)
list(APPEND FOLLY_LINK_LIBRARIES ${LIBAIO_LIBRARIES})
list(APPEND FOLLY_INCLUDE_DIRECTORIES ${LIBAIO_INCLUDE_DIRS})
list(APPEND FOLLY_LINK_LIBRARIES ${CMAKE_DL_LIBS})
list(APPEND CMAKE_REQUIRED_LIBRARIES ${CMAKE_DL_LIBS})
set(FOLLY_USE_SYMBOLIZER OFF)
CHECK_INCLUDE_FILE_CXX(elf.h FOLLY_HAVE_ELF_H)
find_library(UNWIND_LIBRARIES NAMES unwind)
if (UNWIND_LIBRARIES)
list(APPEND FOLLY_LINK_LIBRARIES ${UNWIND_LIBRARIES})
list(APPEND CMAKE_REQUIRED_LIBRARIES ${UNWIND_LIBRARIES})
endif()
check_function_exists(backtrace FOLLY_HAVE_BACKTRACE)
if (FOLLY_HAVE_ELF_H AND FOLLY_HAVE_BACKTRACE AND LIBDWARF_FOUND)
set(FOLLY_USE_SYMBOLIZER ON)
endif()
message(STATUS "Setting FOLLY_USE_SYMBOLIZER: ${FOLLY_USE_SYMBOLIZER}")
# Using clang with libstdc++ requires explicitly linking against libatomic
check_cxx_source_compiles("
#include <atomic>
int main(int argc, char** argv) {
struct Test { int val; };
std::atomic<Test> s;
return static_cast<int>(s.is_lock_free());
}"
FOLLY_CPP_ATOMIC_BUILTIN
)
if(NOT FOLLY_CPP_ATOMIC_BUILTIN)
list(APPEND CMAKE_REQUIRED_LIBRARIES atomic)
list(APPEND FOLLY_LINK_LIBRARIES atomic)
check_cxx_source_compiles("
#include <atomic>
int main(int argc, char** argv) {
struct Test { int val; };
std::atomic<Test> s2;
return static_cast<int>(s2.is_lock_free());
}"
FOLLY_CPP_ATOMIC_WITH_LIBATOMIC
)
if (NOT FOLLY_CPP_ATOMIC_WITH_LIBATOMIC)
message(
FATAL_ERROR "unable to link C++ std::atomic code: you may need \
to install GNU libatomic"
)
endif()
endif()
option(
FOLLY_ASAN_ENABLED
"Build folly with Address Sanitizer enabled."
OFF
)
if (FOLLY_ASAN_ENABLED)
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES GNU)
set(FOLLY_ASAN_ENABLED ON)
set(FOLLY_ASAN_FLAGS -fsanitize=address,undefined)
list(APPEND FOLLY_CXX_FLAGS ${FOLLY_ASAN_FLAGS})
# All of the functions in folly/detail/Sse.cpp are intended to be compiled
# with ASAN disabled. They are marked with attributes to disable the
# sanitizer, but even so, gcc fails to compile them for some reason when
# sanitization is enabled on the compile line.
set_source_files_properties(
"${CMAKE_SOURCE_DIR}/folly/detail/Sse.cpp"
PROPERTIES COMPILE_FLAGS -fno-sanitize=address,undefined
)
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES Clang)
set(FOLLY_ASAN_ENABLED ON)
set(
FOLLY_ASAN_FLAGS
-fno-common
-fsanitize=address,undefined,integer,nullability
-fno-sanitize=unsigned-integer-overflow
)
list(APPEND FOLLY_CXX_FLAGS ${FOLLY_ASAN_FLAGS})
endif()
endif()
add_library(folly_deps INTERFACE)
list(REMOVE_DUPLICATES FOLLY_INCLUDE_DIRECTORIES)
target_include_directories(folly_deps INTERFACE ${FOLLY_INCLUDE_DIRECTORIES})
target_link_libraries(folly_deps INTERFACE
${FOLLY_LINK_LIBRARIES}
${FOLLY_SHINY_DEPENDENCIES}
${FOLLY_ASAN_FLAGS}
)


@@ -1,10 +0,0 @@
prefix=@CMAKE_INSTALL_PREFIX@
libdir=@CMAKE_INSTALL_FULL_LIBDIR@
includedir=@CMAKE_INSTALL_FULL_INCLUDEDIR@
Name: libfolly
Description: Facebook (Folly) C++ library
Version: master
Cflags: -I${includedir} @FOLLY_PKGCONFIG_CFLAGS@
Libs: -L${libdir} -lfolly
Libs.private: @FOLLY_PKGCONFIG_PRIVATE_LIBS@
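Non-CMake builds would query this file through `pkg-config`; a CMake consumer could also pick it up via the FindPkgConfig module. A sketch, assuming the installed `libfolly.pc` is on `PKG_CONFIG_PATH` and `my_tool` is a placeholder target:

```cmake
find_package(PkgConfig REQUIRED)
# Creates the imported target PkgConfig::FOLLY from libfolly.pc.
pkg_check_modules(FOLLY REQUIRED IMPORTED_TARGET libfolly)

add_executable(my_tool main.cpp)
target_link_libraries(my_tool PRIVATE PkgConfig::FOLLY)
```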


@@ -1,731 +0,0 @@
cmake_minimum_required(VERSION 3.0.2 FATAL_ERROR)
# We use the GoogleTest module if it is available (only in CMake 3.9+)
# It requires CMP0054 and CMP0057 to be enabled.
if (POLICY CMP0054)
cmake_policy(SET CMP0054 NEW)
endif()
if (POLICY CMP0057)
cmake_policy(SET CMP0057 NEW)
endif()
# includes
set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/CMake" ${CMAKE_MODULE_PATH})
# package information
set(PACKAGE_NAME "folly")
set(PACKAGE_VERSION "0.58.0-dev")
set(PACKAGE_STRING "${PACKAGE_NAME} ${PACKAGE_VERSION}")
set(PACKAGE_TARNAME "${PACKAGE_NAME}-${PACKAGE_VERSION}")
set(PACKAGE_BUGREPORT "https://github.com/facebook/folly/issues")
# 150+ tests in the root folder anyone? No? I didn't think so.
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
project(${PACKAGE_NAME} CXX C)
set(INCLUDE_INSTALL_DIR include CACHE STRING
"The subdirectory where header files should be installed")
set(LIB_INSTALL_DIR lib CACHE STRING
"The subdirectory where libraries should be installed")
set(BIN_INSTALL_DIR bin CACHE STRING
"The subdirectory where binaries should be installed")
set(CMAKE_INSTALL_DIR lib/cmake/folly CACHE STRING
"The subdirectory where CMake package config files should be installed")
option(BUILD_SHARED_LIBS
"If enabled, build folly as a shared library. \
This is generally discouraged, since folly does not commit to having \
a stable ABI."
OFF
)
# Mark BUILD_SHARED_LIBS as an "advanced" option, since enabling it
# is generally discouraged.
mark_as_advanced(BUILD_SHARED_LIBS)
set(FOLLY_SUPPORT_SHARED_LIBRARY "${BUILD_SHARED_LIBS}")
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
# Check target architecture
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
    message(FATAL_ERROR "Folly requires a 64-bit target architecture.")
endif()
  if (MSVC_VERSION LESS 1910)
    message(
      FATAL_ERROR
      "This build script only supports building Folly on 64-bit Windows with "
      "at least Visual Studio 2017. "
      "MSVC version '${MSVC_VERSION}' is not supported."
    )
  endif()
endif()
set(TOP_DIR "${CMAKE_CURRENT_SOURCE_DIR}")
set(FOLLY_DIR "${CMAKE_CURRENT_SOURCE_DIR}/folly")
set(
FOLLY_DIR_PREFIXES
"${CMAKE_CURRENT_SOURCE_DIR}:${CMAKE_CURRENT_BINARY_DIR}"
)
include(GNUInstallDirs)
set(CMAKE_THREAD_PREFER_PTHREAD ON)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads REQUIRED)
set(FOLLY_HAVE_PTHREAD "${CMAKE_USE_PTHREADS_INIT}")
list(APPEND CMAKE_REQUIRED_LIBRARIES Threads::Threads)
list(APPEND FOLLY_LINK_LIBRARIES Threads::Threads)
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
include(FollyCompilerMSVC)
else()
include(FollyCompilerUnix)
endif()
include(FollyFunctions)
include(folly-deps) # Find the required packages
include(FollyConfigChecks)
configure_file(
${CMAKE_CURRENT_SOURCE_DIR}/CMake/folly-config.h.cmake
${CMAKE_CURRENT_BINARY_DIR}/folly/folly-config.h
)
# We currently build the main libfolly library by finding all sources
# and header files. We then exclude specific files below.
#
# In the future it would perhaps be nicer to explicitly list the files we want
# to include, and to move the source lists in to separate per-subdirectory
# CMakeLists.txt files.
auto_sources(files "*.cpp" "RECURSE" "${FOLLY_DIR}")
auto_sources(hfiles "*.h" "RECURSE" "${FOLLY_DIR}")
# Exclude tests, benchmarks, and other standalone utility executables from the
# library sources. Test sources are listed separately below.
REMOVE_MATCHES_FROM_LISTS(files hfiles
MATCHES
"^${FOLLY_DIR}/build/"
"^${FOLLY_DIR}/experimental/exception_tracer/"
"^${FOLLY_DIR}/experimental/hazptr/bench/"
"^${FOLLY_DIR}/experimental/hazptr/example/"
"^${FOLLY_DIR}/experimental/pushmi/"
"^${FOLLY_DIR}/futures/exercises/"
"^${FOLLY_DIR}/logging/example/"
"^${FOLLY_DIR}/(.*/)?test/"
"^${FOLLY_DIR}/tools/"
"Benchmark.cpp$"
"Test.cpp$"
)
list(REMOVE_ITEM files
${FOLLY_DIR}/experimental/JSONSchemaTester.cpp
${FOLLY_DIR}/experimental/io/HugePageUtil.cpp
${FOLLY_DIR}/python/fibers.cpp
${FOLLY_DIR}/python/GILAwareManualExecutor.cpp
)
list(REMOVE_ITEM hfiles
${FOLLY_DIR}/python/fibers.h
${FOLLY_DIR}/python/GILAwareManualExecutor.h
)
# Explicitly include utility library code from inside
# folly/test and folly/io/async/test/
list(APPEND files
${FOLLY_DIR}/io/async/test/ScopedBoundPort.cpp
${FOLLY_DIR}/io/async/test/SocketPair.cpp
${FOLLY_DIR}/io/async/test/TimeUtil.cpp
)
list(APPEND hfiles
${FOLLY_DIR}/io/async/test/AsyncSSLSocketTest.h
${FOLLY_DIR}/io/async/test/AsyncSocketTest.h
${FOLLY_DIR}/io/async/test/AsyncSocketTest2.h
${FOLLY_DIR}/io/async/test/BlockingSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncServerSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncSSLSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncTransport.h
${FOLLY_DIR}/io/async/test/MockAsyncUDPSocket.h
${FOLLY_DIR}/io/async/test/MockTimeoutManager.h
${FOLLY_DIR}/io/async/test/ScopedBoundPort.h
${FOLLY_DIR}/io/async/test/SocketPair.h
${FOLLY_DIR}/io/async/test/TestSSLServer.h
${FOLLY_DIR}/io/async/test/TimeUtil.h
${FOLLY_DIR}/io/async/test/UndelayedDestruction.h
${FOLLY_DIR}/io/async/test/Util.h
${FOLLY_DIR}/test/TestUtils.h
)
# Exclude specific sources if we do not have third-party libraries
# required to build them.
if (NOT FOLLY_USE_SYMBOLIZER)
REMOVE_MATCHES_FROM_LISTS(files hfiles
MATCHES
"^${FOLLY_DIR}/experimental/symbolizer/"
)
list(REMOVE_ITEM files
${FOLLY_DIR}/SingletonStackTrace.cpp
)
endif()
if (NOT ${LIBAIO_FOUND})
list(REMOVE_ITEM files
${FOLLY_DIR}/experimental/io/AsyncIO.cpp
)
list(REMOVE_ITEM hfiles
${FOLLY_DIR}/experimental/io/AsyncIO.h
)
endif()
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
list(REMOVE_ITEM files
${FOLLY_DIR}/Poly.cpp
${FOLLY_DIR}/Subprocess.cpp
)
list(REMOVE_ITEM hfiles
${FOLLY_DIR}/Poly.h
${FOLLY_DIR}/Poly-inl.h
${FOLLY_DIR}/detail/PolyDetail.h
${FOLLY_DIR}/detail/TypeList.h
${FOLLY_DIR}/poly/Nullable.h
${FOLLY_DIR}/poly/Regular.h
)
endif()
add_library(folly_base OBJECT
${files} ${hfiles}
${CMAKE_CURRENT_BINARY_DIR}/folly/folly-config.h
)
auto_source_group(folly ${FOLLY_DIR} ${files} ${hfiles})
apply_folly_compile_options_to_target(folly_base)
# Add the generated files to the correct source group.
source_group("folly" FILES ${CMAKE_CURRENT_BINARY_DIR}/folly/folly-config.h)
# Generate pkg-config variables from folly_deps before we add our own
# build/install-time include directory generator expressions
include(GenPkgConfig)
gen_pkgconfig_vars(FOLLY_PKGCONFIG folly_deps)
target_include_directories(folly_deps
INTERFACE
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_BINARY_DIR}>
$<INSTALL_INTERFACE:include>
)
target_include_directories(folly_base
PUBLIC
$<TARGET_PROPERTY:folly_deps,INTERFACE_INCLUDE_DIRECTORIES>
)
target_compile_definitions(folly_base
PUBLIC
$<TARGET_PROPERTY:folly_deps,INTERFACE_COMPILE_DEFINITIONS>
)
add_library(folly
$<TARGET_OBJECTS:folly_base>
)
apply_folly_compile_options_to_target(folly)
target_link_libraries(folly PUBLIC folly_deps)
install(TARGETS folly folly_deps
EXPORT folly
RUNTIME DESTINATION bin
LIBRARY DESTINATION ${LIB_INSTALL_DIR}
ARCHIVE DESTINATION ${LIB_INSTALL_DIR})
auto_install_files(folly ${FOLLY_DIR}
${hfiles}
)
install(
FILES ${CMAKE_CURRENT_BINARY_DIR}/folly/folly-config.h
DESTINATION ${INCLUDE_INSTALL_DIR}/folly
COMPONENT dev
)
# Generate the folly-config.cmake file for installation so that
# downstream projects that use folly can easily depend on it in their CMake
# files using "find_package(folly CONFIG)"
include(CMakePackageConfigHelpers)
configure_package_config_file(
CMake/folly-config.cmake.in
folly-config.cmake
INSTALL_DESTINATION ${CMAKE_INSTALL_DIR}
PATH_VARS
INCLUDE_INSTALL_DIR
CMAKE_INSTALL_DIR
)
install(
FILES ${CMAKE_CURRENT_BINARY_DIR}/folly-config.cmake
DESTINATION ${CMAKE_INSTALL_DIR}
COMPONENT dev
)
install(
EXPORT folly
DESTINATION ${CMAKE_INSTALL_DIR}
NAMESPACE Folly::
FILE folly-targets.cmake
COMPONENT dev
)
# Generate a pkg-config file so that downstream projects that don't use
# CMake can depend on folly using pkg-config.
configure_file(
${CMAKE_CURRENT_SOURCE_DIR}/CMake/libfolly.pc.in
${CMAKE_CURRENT_BINARY_DIR}/libfolly.pc
@ONLY
)
install(
FILES ${CMAKE_CURRENT_BINARY_DIR}/libfolly.pc
DESTINATION ${LIB_INSTALL_DIR}/pkgconfig
COMPONENT dev
)
option(BUILD_TESTS "If enabled, compile the tests." OFF)
option(BUILD_BROKEN_TESTS "If enabled, compile tests that are known to be broken." OFF)
option(BUILD_HANGING_TESTS "If enabled, compile tests that are known to hang." OFF)
option(BUILD_SLOW_TESTS "If enabled, compile tests that take a while to run in debug mode." OFF)
if (BUILD_TESTS)
option(USE_CMAKE_GOOGLE_TEST_INTEGRATION "If enabled, use the google test integration included in CMake." ON)
find_package(GMock MODULE REQUIRED)
if (USE_CMAKE_GOOGLE_TEST_INTEGRATION)
include(GoogleTest OPTIONAL RESULT_VARIABLE HAVE_CMAKE_GTEST)
enable_testing()
else()
set(HAVE_CMAKE_GTEST OFF)
endif()
# The ThreadLocalTest code uses a helper shared library for one of its tests.
# This can only be built if folly itself was built as a shared library.
if (BUILD_SHARED_LIBS)
add_library(thread_local_test_lib MODULE
${FOLLY_DIR}/test/ThreadLocalTestLib.cpp
)
set_target_properties(thread_local_test_lib PROPERTIES PREFIX "")
apply_folly_compile_options_to_target(thread_local_test_lib)
target_link_libraries(thread_local_test_lib PUBLIC folly)
target_include_directories(
thread_local_test_lib
PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
endif()
add_library(folly_test_support
${FOLLY_DIR}/test/common/TestMain.cpp
${FOLLY_DIR}/test/DeterministicSchedule.cpp
${FOLLY_DIR}/test/DeterministicSchedule.h
${FOLLY_DIR}/test/SingletonTestStructs.cpp
${FOLLY_DIR}/test/SocketAddressTestHelper.cpp
${FOLLY_DIR}/test/SocketAddressTestHelper.h
${FOLLY_DIR}/experimental/test/CodingTestUtils.cpp
${FOLLY_DIR}/logging/test/ConfigHelpers.cpp
${FOLLY_DIR}/logging/test/ConfigHelpers.h
${FOLLY_DIR}/logging/test/TestLogHandler.cpp
${FOLLY_DIR}/logging/test/TestLogHandler.h
${FOLLY_DIR}/futures/test/TestExecutor.cpp
${FOLLY_DIR}/futures/test/TestExecutor.h
${FOLLY_DIR}/io/async/test/BlockingSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncServerSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncSSLSocket.h
${FOLLY_DIR}/io/async/test/MockAsyncTransport.h
${FOLLY_DIR}/io/async/test/MockAsyncUDPSocket.h
${FOLLY_DIR}/io/async/test/MockTimeoutManager.h
${FOLLY_DIR}/io/async/test/ScopedBoundPort.cpp
${FOLLY_DIR}/io/async/test/ScopedBoundPort.h
${FOLLY_DIR}/io/async/test/SocketPair.cpp
${FOLLY_DIR}/io/async/test/SocketPair.h
${FOLLY_DIR}/io/async/test/TestSSLServer.cpp
${FOLLY_DIR}/io/async/test/TestSSLServer.h
${FOLLY_DIR}/io/async/test/TimeUtil.cpp
${FOLLY_DIR}/io/async/test/TimeUtil.h
${FOLLY_DIR}/io/async/test/UndelayedDestruction.h
${FOLLY_DIR}/io/async/test/Util.h
)
target_compile_definitions(folly_test_support
PUBLIC
${LIBGMOCK_DEFINES}
)
target_include_directories(folly_test_support
SYSTEM
PUBLIC
${LIBGMOCK_INCLUDE_DIR}
)
target_link_libraries(folly_test_support
PUBLIC
${BOOST_LIBRARIES}
follybenchmark
folly
${LIBGMOCK_LIBRARIES}
)
apply_folly_compile_options_to_target(folly_test_support)
folly_define_tests(
DIRECTORY chrono/test/
TEST chrono_conv_test WINDOWS_DISABLED
SOURCES ConvTest.cpp
DIRECTORY compression/test/
TEST compression_test SLOW SOURCES CompressionTest.cpp
DIRECTORY container/test/
TEST access_test SOURCES AccessTest.cpp
TEST array_test SOURCES ArrayTest.cpp
TEST bit_iterator_test SOURCES BitIteratorTest.cpp
# TODO: CMake's gtest_add_tests() function currently chokes on
# EnumerateTest.cpp since it uses macros to define tests.
#TEST enumerate_test SOURCES EnumerateTest.cpp
TEST evicting_cache_map_test SOURCES EvictingCacheMapTest.cpp
TEST f14_fwd_test SOURCES F14FwdTest.cpp
TEST f14_map_test SOURCES F14MapTest.cpp
TEST f14_set_test SOURCES F14SetTest.cpp
TEST foreach_test SOURCES ForeachTest.cpp
TEST merge_test SOURCES MergeTest.cpp
TEST sparse_byte_set_test SOURCES SparseByteSetTest.cpp
DIRECTORY concurrency/test/
TEST atomic_shared_ptr_test SOURCES AtomicSharedPtrTest.cpp
TEST cache_locality_test SOURCES CacheLocalityTest.cpp
TEST core_cached_shared_ptr_test SOURCES CoreCachedSharedPtrTest.cpp
TEST concurrent_hash_map_test SOURCES ConcurrentHashMapTest.cpp
TEST dynamic_bounded_queue_test WINDOWS_DISABLED
SOURCES DynamicBoundedQueueTest.cpp
TEST unbounded_queue_test SOURCES UnboundedQueueTest.cpp
DIRECTORY detail/test/
TEST static_singleton_manager_test SOURCES StaticSingletonManagerTest.cpp
DIRECTORY executors/test/
TEST async_helpers_test SOURCES AsyncTest.cpp
TEST codel_test SOURCES CodelTest.cpp
TEST executor_test SOURCES ExecutorTest.cpp
TEST fiber_io_executor_test SOURCES FiberIOExecutorTest.cpp
TEST global_executor_test SOURCES GlobalExecutorTest.cpp
TEST serial_executor_test SOURCES SerialExecutorTest.cpp
TEST thread_pool_executor_test WINDOWS_DISABLED
SOURCES ThreadPoolExecutorTest.cpp
TEST threaded_executor_test SOURCES ThreadedExecutorTest.cpp
TEST timed_drivable_executor_test SOURCES TimedDrivableExecutorTest.cpp
DIRECTORY executors/task_queue/test/
TEST unbounded_blocking_queue_test SOURCES UnboundedBlockingQueueTest.cpp
DIRECTORY experimental/test/
TEST autotimer_test SOURCES AutoTimerTest.cpp
TEST bits_test_2 SOURCES BitsTest.cpp
TEST bitvector_test SOURCES BitVectorCodingTest.cpp
TEST dynamic_parser_test SOURCES DynamicParserTest.cpp
TEST eliasfano_test SOURCES EliasFanoCodingTest.cpp
TEST event_count_test SOURCES EventCountTest.cpp
# FunctionSchedulerTest has a lot of timing-dependent checks,
# and tends to fail on heavily loaded systems.
TEST function_scheduler_test BROKEN SOURCES FunctionSchedulerTest.cpp
TEST future_dag_test SOURCES FutureDAGTest.cpp
TEST json_schema_test SOURCES JSONSchemaTest.cpp
TEST lock_free_ring_buffer_test SOURCES LockFreeRingBufferTest.cpp
#TEST nested_command_line_app_test SOURCES NestedCommandLineAppTest.cpp
#TEST program_options_test SOURCES ProgramOptionsTest.cpp
# Depends on liburcu
#TEST read_mostly_shared_ptr_test SOURCES ReadMostlySharedPtrTest.cpp
#TEST ref_count_test SOURCES RefCountTest.cpp
TEST select64_test SOURCES Select64Test.cpp
TEST stringkeyed_test SOURCES StringKeyedTest.cpp
TEST test_util_test SOURCES TestUtilTest.cpp
TEST tuple_ops_test SOURCES TupleOpsTest.cpp
DIRECTORY experimental/io/test/
# Depends on libaio
#TEST async_io_test SOURCES AsyncIOTest.cpp
TEST fs_util_test SOURCES FsUtilTest.cpp
DIRECTORY logging/test/
TEST async_file_writer_test SOURCES AsyncFileWriterTest.cpp
TEST config_parser_test SOURCES ConfigParserTest.cpp
TEST config_update_test SOURCES ConfigUpdateTest.cpp
TEST file_handler_factory_test SOURCES FileHandlerFactoryTest.cpp
TEST glog_formatter_test SOURCES GlogFormatterTest.cpp
TEST immediate_file_writer_test SOURCES ImmediateFileWriterTest.cpp
TEST log_category_test SOURCES LogCategoryTest.cpp
TEST logger_db_test SOURCES LoggerDBTest.cpp
TEST logger_test SOURCES LoggerTest.cpp
TEST log_level_test SOURCES LogLevelTest.cpp
TEST log_message_test SOURCES LogMessageTest.cpp
TEST log_name_test SOURCES LogNameTest.cpp
TEST log_stream_test SOURCES LogStreamTest.cpp
TEST printf_test SOURCES PrintfTest.cpp
TEST rate_limiter_test SOURCES RateLimiterTest.cpp
TEST standard_log_handler_test SOURCES StandardLogHandlerTest.cpp
TEST xlog_test
HEADERS
XlogHeader1.h
XlogHeader2.h
SOURCES
XlogFile1.cpp
XlogFile2.cpp
XlogTest.cpp
DIRECTORY fibers/test/
TEST fibers_test SOURCES FibersTest.cpp
DIRECTORY functional/test/
TEST apply_tuple_test WINDOWS_DISABLED
SOURCES ApplyTupleTest.cpp
TEST partial_test SOURCES PartialTest.cpp
DIRECTORY futures/test/
TEST barrier_test SOURCES BarrierTest.cpp
TEST callback_lifetime_test SOURCES CallbackLifetimeTest.cpp
TEST collect_test SOURCES CollectTest.cpp
TEST context_test SOURCES ContextTest.cpp
TEST core_test SOURCES CoreTest.cpp
TEST ensure_test SOURCES EnsureTest.cpp
TEST filter_test SOURCES FilterTest.cpp
TEST future_splitter_test SOURCES FutureSplitterTest.cpp
TEST future_test WINDOWS_DISABLED
SOURCES FutureTest.cpp
TEST header_compile_test SOURCES HeaderCompileTest.cpp
TEST interrupt_test SOURCES InterruptTest.cpp
TEST map_test SOURCES MapTest.cpp
TEST non_copyable_lambda_test SOURCES NonCopyableLambdaTest.cpp
TEST poll_test SOURCES PollTest.cpp
TEST promise_test SOURCES PromiseTest.cpp
TEST reduce_test SOURCES ReduceTest.cpp
TEST retrying_test SOURCES RetryingTest.cpp
TEST self_destruct_test SOURCES SelfDestructTest.cpp
TEST shared_promise_test SOURCES SharedPromiseTest.cpp
TEST test_executor_test SOURCES TestExecutorTest.cpp
TEST then_compile_test
HEADERS
ThenCompileTest.h
SOURCES
ThenCompileTest.cpp
TEST then_test SOURCES ThenTest.cpp
TEST timekeeper_test SOURCES TimekeeperTest.cpp
TEST times_test SOURCES TimesTest.cpp
TEST unwrap_test SOURCES UnwrapTest.cpp
TEST via_test SOURCES ViaTest.cpp
TEST wait_test SOURCES WaitTest.cpp
TEST when_test SOURCES WhenTest.cpp
TEST while_do_test SOURCES WhileDoTest.cpp
TEST will_equal_test SOURCES WillEqualTest.cpp
TEST window_test WINDOWS_DISABLED
SOURCES WindowTest.cpp
DIRECTORY gen/test/
# MSVC bug can't resolve initializer_list constructor properly
#TEST base_test SOURCES BaseTest.cpp
TEST combine_test SOURCES CombineTest.cpp
TEST parallel_map_test SOURCES ParallelMapTest.cpp
TEST parallel_test SOURCES ParallelTest.cpp
DIRECTORY hash/test/
TEST checksum_test SOURCES ChecksumTest.cpp
TEST hash_test WINDOWS_DISABLED
SOURCES HashTest.cpp
TEST spooky_hash_v1_test SOURCES SpookyHashV1Test.cpp
TEST spooky_hash_v2_test SOURCES SpookyHashV2Test.cpp
DIRECTORY io/test/
TEST iobuf_test SOURCES IOBufTest.cpp
TEST iobuf_cursor_test SOURCES IOBufCursorTest.cpp
TEST iobuf_queue_test SOURCES IOBufQueueTest.cpp
TEST record_io_test SOURCES RecordIOTest.cpp
TEST ShutdownSocketSetTest HANGING
SOURCES ShutdownSocketSetTest.cpp
DIRECTORY io/async/test/
# A number of tests in the async_test binary are unfortunately flaky.
# When run under Travis CI a number of the tests also hang (it looks
# like they do not get expected socket accept events, causing them
# to never break out of their event loops).
TEST async_test BROKEN
CONTENT_DIR certs/
HEADERS
AsyncSocketTest.h
AsyncSSLSocketTest.h
SOURCES
AsyncPipeTest.cpp
AsyncSocketExceptionTest.cpp
AsyncSocketTest.cpp
AsyncSocketTest2.cpp
AsyncSSLSocketTest.cpp
AsyncSSLSocketTest2.cpp
AsyncSSLSocketWriteTest.cpp
AsyncTransportTest.cpp
# This is disabled because it depends on things that don't exist
# on Windows.
#EventHandlerTest.cpp
# The async signal handler is not supported on Windows.
#AsyncSignalHandlerTest.cpp
TEST async_timeout_test SOURCES AsyncTimeoutTest.cpp
TEST AsyncUDPSocketTest SOURCES AsyncUDPSocketTest.cpp
TEST DelayedDestructionTest SOURCES DelayedDestructionTest.cpp
TEST DelayedDestructionBaseTest SOURCES DelayedDestructionBaseTest.cpp
TEST DestructorCheckTest SOURCES DestructorCheckTest.cpp
TEST EventBaseTest SOURCES EventBaseTest.cpp
TEST EventBaseLocalTest SOURCES EventBaseLocalTest.cpp
TEST HHWheelTimerTest SOURCES HHWheelTimerTest.cpp
TEST HHWheelTimerSlowTests SLOW
SOURCES HHWheelTimerSlowTests.cpp
TEST NotificationQueueTest SOURCES NotificationQueueTest.cpp
TEST RequestContextTest SOURCES RequestContextTest.cpp
TEST ScopedEventBaseThreadTest SOURCES ScopedEventBaseThreadTest.cpp
TEST ssl_session_test
CONTENT_DIR certs/
SOURCES
SSLSessionTest.cpp
TEST writechain_test SOURCES WriteChainAsyncTransportWrapperTest.cpp
DIRECTORY io/async/ssl/test/
TEST ssl_errors_test SOURCES SSLErrorsTest.cpp
DIRECTORY lang/test/
TEST bits_test SOURCES BitsTest.cpp
TEST cold_class_test SOURCES ColdClassTest.cpp
TEST safe_assert_test SOURCES SafeAssertTest.cpp
DIRECTORY memory/test/
TEST arena_test SOURCES ArenaTest.cpp
TEST thread_cached_arena_test WINDOWS_DISABLED
SOURCES ThreadCachedArenaTest.cpp
TEST mallctl_helper_test SOURCES MallctlHelperTest.cpp
DIRECTORY portability/test/
TEST constexpr_test SOURCES ConstexprTest.cpp
TEST libgen-test SOURCES LibgenTest.cpp
TEST openssl_portability_test SOURCES OpenSSLPortabilityTest.cpp
TEST pthread_test SOURCES PThreadTest.cpp
TEST time-test SOURCES TimeTest.cpp
DIRECTORY ssl/test/
TEST openssl_hash_test SOURCES OpenSSLHashTest.cpp
DIRECTORY stats/test/
TEST buffered_stat_test SOURCES BufferedStatTest.cpp
TEST digest_builder_test SOURCES DigestBuilderTest.cpp
TEST histogram_test SOURCES HistogramTest.cpp
TEST quantile_estimator_test SOURCES QuantileEstimatorTest.cpp
TEST sliding_window_test SOURCES SlidingWindowTest.cpp
TEST tdigest_test SOURCES TDigestTest.cpp
TEST timeseries_histogram_test SOURCES TimeseriesHistogramTest.cpp
TEST timeseries_test SOURCES TimeSeriesTest.cpp
DIRECTORY synchronization/test/
TEST baton_test SOURCES BatonTest.cpp
TEST call_once_test SOURCES CallOnceTest.cpp
TEST lifo_sem_test SOURCES LifoSemTests.cpp
TEST rw_spin_lock_test SOURCES RWSpinLockTest.cpp
DIRECTORY system/test/
TEST memory_mapping_test SOURCES MemoryMappingTest.cpp
TEST shell_test SOURCES ShellTest.cpp
#TEST subprocess_test SOURCES SubprocessTest.cpp
TEST thread_id_test SOURCES ThreadIdTest.cpp
TEST thread_name_test SOURCES ThreadNameTest.cpp
DIRECTORY synchronization/test/
TEST atomic_struct_test SOURCES AtomicStructTest.cpp
TEST small_locks_test SOURCES SmallLocksTest.cpp
TEST atomic_util_test SOURCES AtomicUtilTest.cpp
DIRECTORY test/
TEST ahm_int_stress_test SOURCES AHMIntStressTest.cpp
TEST arena_smartptr_test SOURCES ArenaSmartPtrTest.cpp
TEST ascii_check_test SOURCES AsciiCaseInsensitiveTest.cpp
TEST atomic_bit_set_test SOURCES AtomicBitSetTest.cpp
TEST atomic_hash_array_test SOURCES AtomicHashArrayTest.cpp
TEST atomic_hash_map_test HANGING
SOURCES AtomicHashMapTest.cpp
TEST atomic_linked_list_test SOURCES AtomicLinkedListTest.cpp
TEST atomic_unordered_map_test SOURCES AtomicUnorderedMapTest.cpp
TEST cacheline_padded_test SOURCES CachelinePaddedTest.cpp
TEST clock_gettime_wrappers_test SOURCES ClockGettimeWrappersTest.cpp
TEST concurrent_skip_list_test SOURCES ConcurrentSkipListTest.cpp
TEST conv_test SOURCES ConvTest.cpp
TEST cpu_id_test SOURCES CpuIdTest.cpp
TEST demangle_test SOURCES DemangleTest.cpp
TEST deterministic_schedule_test SOURCES DeterministicScheduleTest.cpp
TEST discriminated_ptr_test SOURCES DiscriminatedPtrTest.cpp
TEST dynamic_test SOURCES DynamicTest.cpp
TEST dynamic_converter_test SOURCES DynamicConverterTest.cpp
TEST dynamic_other_test SOURCES DynamicOtherTest.cpp
TEST endian_test SOURCES EndianTest.cpp
TEST exception_test SOURCES ExceptionTest.cpp
TEST exception_wrapper_test SOURCES ExceptionWrapperTest.cpp
TEST expected_test SOURCES ExpectedTest.cpp
TEST fbvector_test SOURCES FBVectorTest.cpp
TEST file_test SOURCES FileTest.cpp
# Open-source linux build can't handle running this.
#TEST file_lock_test SOURCES FileLockTest.cpp
TEST file_util_test HANGING
SOURCES FileUtilTest.cpp
TEST fingerprint_test SOURCES FingerprintTest.cpp
TEST format_other_test SOURCES FormatOtherTest.cpp
TEST format_test SOURCES FormatTest.cpp
TEST function_test BROKEN
SOURCES FunctionTest.cpp
TEST function_ref_test SOURCES FunctionRefTest.cpp
TEST futex_test SOURCES FutexTest.cpp
TEST glog_test SOURCES GLogTest.cpp
TEST group_varint_test SOURCES GroupVarintTest.cpp
TEST group_varint_test_ssse3 SOURCES GroupVarintTest.cpp
TEST has_member_fn_traits_test SOURCES HasMemberFnTraitsTest.cpp
TEST iterators_test SOURCES IteratorsTest.cpp
TEST indestructible_test SOURCES IndestructibleTest.cpp
TEST indexed_mem_pool_test BROKEN
SOURCES IndexedMemPoolTest.cpp
# MSVC Preprocessor stringizing raw string literals bug
#TEST json_test SOURCES JsonTest.cpp
TEST json_pointer_test SOURCES json_pointer_test.cpp
TEST json_patch_test SOURCES json_patch_test.cpp
TEST json_other_test
CONTENT_DIR json_test_data/
SOURCES
JsonOtherTest.cpp
TEST lazy_test SOURCES LazyTest.cpp
TEST lock_traits_test SOURCES LockTraitsTest.cpp
TEST locks_test SOURCES SpinLockTest.cpp
TEST math_test SOURCES MathTest.cpp
TEST map_util_test SOURCES MapUtilTest.cpp
TEST memcpy_test SOURCES MemcpyTest.cpp
TEST memory_idler_test SOURCES MemoryIdlerTest.cpp
TEST memory_test WINDOWS_DISABLED
SOURCES MemoryTest.cpp
TEST move_wrapper_test SOURCES MoveWrapperTest.cpp
TEST mpmc_pipeline_test SOURCES MPMCPipelineTest.cpp
TEST mpmc_queue_test SLOW
SOURCES MPMCQueueTest.cpp
TEST network_address_test HANGING
SOURCES
IPAddressTest.cpp
MacAddressTest.cpp
SocketAddressTest.cpp
TEST optional_test SOURCES OptionalTest.cpp
TEST packed_sync_ptr_test HANGING
SOURCES PackedSyncPtrTest.cpp
TEST padded_test SOURCES PaddedTest.cpp
#TEST poly_test SOURCES PolyTest.cpp
TEST portability_test SOURCES PortabilityTest.cpp
TEST producer_consumer_queue_test SLOW
SOURCES ProducerConsumerQueueTest.cpp
TEST random_test SOURCES RandomTest.cpp
TEST range_test SOURCES RangeTest.cpp
TEST scope_guard_test SOURCES ScopeGuardTest.cpp
# Heavily dependent on drand and srand48
#TEST shared_mutex_test SOURCES SharedMutexTest.cpp
# SingletonTest requires Subprocess
#TEST singleton_test SOURCES SingletonTest.cpp
TEST singleton_test_global SOURCES SingletonTestGlobal.cpp
TEST singleton_thread_local_test SOURCES SingletonThreadLocalTest.cpp
TEST small_vector_test WINDOWS_DISABLED
SOURCES small_vector_test.cpp
TEST sorted_vector_types_test SOURCES sorted_vector_test.cpp
TEST string_test SOURCES StringTest.cpp
TEST synchronized_test WINDOWS_DISABLED
SOURCES SynchronizedTest.cpp
TEST thread_cached_int_test SOURCES ThreadCachedIntTest.cpp
TEST thread_local_test SOURCES ThreadLocalTest.cpp
TEST timeout_queue_test SOURCES TimeoutQueueTest.cpp
TEST token_bucket_test SOURCES TokenBucketTest.cpp
TEST traits_test SOURCES TraitsTest.cpp
TEST try_test SOURCES TryTest.cpp
TEST unit_test SOURCES UnitTest.cpp
TEST uri_test SOURCES UriTest.cpp
TEST varint_test SOURCES VarintTest.cpp
)
endif()
add_subdirectory(folly)


@ -1,5 +0,0 @@
# Code of Conduct
Facebook has adopted a Code of Conduct that we expect project participants to
adhere to. Please [read the full text](https://code.facebook.com/codeofconduct)
so that you can understand what actions will and will not be tolerated.


@ -1,33 +0,0 @@
# Contributing to Folly
We want to make contributing to this project as easy and transparent as
possible.
## Code of Conduct
The code of conduct is described in [`CODE_OF_CONDUCT.md`](CODE_OF_CONDUCT.md).
## Pull Requests
We actively welcome your pull requests.
1. Fork the repo and create your branch from `master`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
5. If you haven't already, complete the Contributor License Agreement ("CLA").
## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Facebook's open source projects.
Complete your CLA here: <https://code.facebook.com/cla>
## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.
Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.
## License
By contributing to folly, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.


@ -1,177 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS


@ -1,246 +0,0 @@
Folly: Facebook Open-source Library
-----------------------------------
[![Build Status](https://travis-ci.org/facebook/folly.svg?branch=master)](https://travis-ci.org/facebook/folly)
### What is `folly`?
Folly (acronymed loosely after Facebook Open Source Library) is a
library of C++14 components designed with practicality and efficiency
in mind. **Folly contains a variety of core library components used extensively
at Facebook**. In particular, it's often a dependency of Facebook's other
open source C++ efforts and a place where those projects can share code.
It complements (as opposed to competing against) offerings
such as Boost and of course `std`. In fact, we embark on defining our
own component only when something we need is either not available, or
does not meet the needed performance profile. We endeavor to remove
things from folly if or when `std` or Boost obsoletes them.
Performance concerns permeate much of Folly, sometimes leading to
designs that are more idiosyncratic than they would otherwise be (see
e.g. `PackedSyncPtr.h`, `SmallLocks.h`). Good performance at large
scale is a unifying theme in all of Folly.
### Logical Design
Folly is a collection of relatively independent components, some as
simple as a few symbols. There is no restriction on internal
dependencies, meaning that a given folly module may use any other
folly components.
All symbols are defined in the top-level namespace `folly`, except of
course macros. Macro names are ALL_UPPERCASE and should be prefixed
with `FOLLY_`. Namespace `folly` defines other internal namespaces
such as `internal` or `detail`. User code should not depend on symbols
in those namespaces.
Folly has an `experimental` directory as well. This designation connotes
primarily that we feel the API may change heavily over time. This code,
typically, is still in heavy use and is well tested.
### Physical Design
At the top level Folly uses the classic "stuttering" scheme
`folly/folly` used by Boost and others. The first directory serves as
an installation root of the library (with possible versioning a la
`folly-1.0/`), and the second is to distinguish the library when
including files, e.g. `#include <folly/FBString.h>`.
The directory structure is flat (mimicking the namespace structure),
i.e. we don't have an elaborate directory hierarchy (it is possible
this will change in future versions). The subdirectory `experimental`
contains files that are used inside folly and possibly at Facebook but
not considered stable enough for client use. Your code should not use
files in `folly/experimental` lest it break when you update Folly.
The `folly/folly/test` subdirectory includes the unittests for all
components, usually named `ComponentXyzTest.cpp` for each
`ComponentXyz.*`. The `folly/folly/docs` directory contains
documentation.
### What's in it?
Because of folly's fairly flat structure, the best way to see what's in it
is to look at the headers in [top level `folly/` directory](https://github.com/facebook/folly/tree/master/folly). You can also
check the [`docs` folder](folly/docs) for documentation, starting with the
[overview](folly/docs/Overview.md).
Folly is published on Github at https://github.com/facebook/folly
### Build Notes
#### Dependencies
folly requires gcc 4.9+ and a version of boost compiled with C++14 support.
googletest is required to build and run folly's tests. You can download
it from https://github.com/google/googletest/archive/release-1.8.0.tar.gz
The following commands can be used to download and install it:
```
wget https://github.com/google/googletest/archive/release-1.8.0.tar.gz && \
tar zxf release-1.8.0.tar.gz && \
rm -f release-1.8.0.tar.gz && \
cd googletest-release-1.8.0 && \
cmake . && \
make && \
make install
```
#### Finding dependencies in non-default locations
If you have boost, gtest, or other dependencies installed in a non-default
location, you can use the `CMAKE_INCLUDE_PATH` and `CMAKE_LIBRARY_PATH`
variables to make CMake also look for header files and libraries in
non-standard locations. For example, to also search the directories
`/alt/include/path1` and `/alt/include/path2` for header files and the
directories `/alt/lib/path1` and `/alt/lib/path2` for libraries, you can invoke
`cmake` as follows:
```
cmake \
-DCMAKE_INCLUDE_PATH=/alt/include/path1:/alt/include/path2 \
-DCMAKE_LIBRARY_PATH=/alt/lib/path1:/alt/lib/path2 ...
```
#### Ubuntu 16.04 LTS
The following packages are required (feel free to cut and paste the apt-get
command below):
```
sudo apt-get install \
g++ \
cmake \
libboost-all-dev \
libevent-dev \
libdouble-conversion-dev \
libgoogle-glog-dev \
libgflags-dev \
libiberty-dev \
liblz4-dev \
liblzma-dev \
libsnappy-dev \
make \
zlib1g-dev \
binutils-dev \
libjemalloc-dev \
libssl-dev \
pkg-config
```
If advanced debugging functionality is required, use:
```
sudo apt-get install \
libunwind8-dev \
libelf-dev \
libdwarf-dev
```
In the folly directory, run:
```
mkdir _build && cd _build
cmake ..
make -j $(nproc)
make install
```
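`nproc` is specific to GNU coreutils and may be missing on minimal systems; a hedged fallback sketch for choosing the `make -j` level:

```shell
# Pick a parallelism level for `make -j`, defaulting to 1 when nproc is absent
jobs=$( (nproc || echo 1) 2>/dev/null )
echo "building with make -j ${jobs}"
```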
#### OS X (Homebrew)
folly is available as a Formula and releases may be built via `brew install folly`.
You may also use `folly/build/bootstrap-osx-homebrew.sh` to build against `master`:
```
cd folly
./build/bootstrap-osx-homebrew.sh
```
#### OS X (MacPorts)
Install the required packages from MacPorts:
```
sudo port install \
autoconf \
automake \
boost \
gflags \
git \
google-glog \
libevent \
libtool \
lz4 \
lzma \
scons \
snappy \
zlib
```
Download and install double-conversion:
```
git clone https://github.com/google/double-conversion.git
cd double-conversion
cmake -DBUILD_SHARED_LIBS=ON .
make
sudo make install
```
Download and install folly with the parameters listed below:
```
git clone https://github.com/facebook/folly.git
cd folly/folly
autoreconf -ivf
./configure CPPFLAGS="-I/opt/local/include" LDFLAGS="-L/opt/local/lib"
make
sudo make install
```
#### Windows (Vcpkg)
folly is available in [Vcpkg](https://github.com/Microsoft/vcpkg#vcpkg) and releases may be built via `vcpkg install folly:x64-windows`.
You may also use `vcpkg install folly:x64-windows --head` to build against `master`.
#### Other Linux distributions
- double-conversion (https://github.com/google/double-conversion)
Download and build double-conversion.
You may need to tell cmake where to find it.
[double-conversion/] `ln -s src double-conversion`
[folly/] `mkdir build && cd build`
[folly/build/] `cmake "-DCMAKE_INCLUDE_PATH=$DOUBLE_CONVERSION_HOME/include" "-DCMAKE_LIBRARY_PATH=$DOUBLE_CONVERSION_HOME/lib" ..`
[folly/build/] `make`
- additional platform specific dependencies:
Fedora >= 21 64-bit (last tested on Fedora 28 64-bit)
- gcc
- gcc-c++
- cmake
- automake
- boost-devel
- libtool
- lz4-devel
- lzma-devel
- snappy-devel
- zlib-devel
- glog-devel
- gflags-devel
- scons
- double-conversion-devel
- openssl-devel
- libevent-devel
Optional
- libdwarf-dev
- libelf-dev
- libunwind8-dev

@ -1,3 +0,0 @@
reviewerCountPolicy:
minimumApproverCount: 1
creatorVoteCounts: false

@ -1,14 +0,0 @@
This directory contains `fbcode_builder` configuration and scripts.
Note that the `folly/build` subdirectory also contains some additional build
scripts for other platforms.
## Building using `fbcode_builder`
`fbcode_builder` is a small tool shared by several Facebook projects to help
drive continuous integration builds for our open source repositories. Its
files are in `folly/fbcode_builder` (on Github) or in
`fbcode/opensource/fbcode_builder` (inside Facebook's repo).
Start with the READMEs in the `fbcode_builder` directory.
`./fbcode_builder_config.py` contains the project-specific configuration.

Folly/build/fbcode_builder/.gitignore

@ -1,3 +0,0 @@
# Facebook-internal CI builds don't have write permission outside of the
# source tree, so we install all projects into this directory.
/facebook_ci

@ -1,44 +0,0 @@
## Debugging Docker builds
To debug a build failure, start up a shell inside the just-failed image as
follows:
```
docker ps -a | head # Grab the container ID
docker commit CONTAINER_ID # Grab the SHA string
docker run -it SHA_STRING /bin/bash
# Debug as usual, e.g. `./run-cmake.sh Debug`, `make`, `apt-get install gdb`
```
## A note on Docker security
While the Dockerfile generated above is quite simple, you must be aware that
using Docker to run arbitrary code can present significant security risks:
- Code signature validation is off by default (as of 2016), exposing you to
man-in-the-middle malicious code injection.
- You implicitly trust the world -- a Dockerfile cannot annotate that
you trust the image `debian:8.6` because you trust a particular
certificate -- rather, you trust the name, and that it will never be
hijacked.
- Sandboxing in the Linux kernel is not perfect, and the builds run code as
root. Any compromised code can likely escalate to the host system.
Specifically, you must be very careful only to add trusted OS images to the
build flow.
Consider setting this variable before running any Docker container -- this
will validate a signature on the base image before running code from it:
```
export DOCKER_CONTENT_TRUST=1
```
Note that unless you go through the extra steps of notarizing the resulting
images, you will have to disable trust to enter intermediate images, e.g.
```
DOCKER_CONTENT_TRUST= docker run -it YOUR_IMAGE_ID /bin/bash
```

@ -1,60 +0,0 @@
# Easy builds for Facebook projects
This is a Python 2.6+ library designed to simplify continuous-integration
(and other builds) of Facebook projects.
For external Travis builds, the entry point is `travis_docker_build.sh`.
## Using Docker to reproduce a CI build
If you are debugging or enhancing a CI build, you will want to do so from
a host or virtual machine that can run a reasonably modern version of Docker:
``` sh
./make_docker_context.py --help # See available options for OS & compiler
# Tiny wrapper that starts a Travis-like build with compile caching:
os_image=ubuntu:16.04 \
gcc_version=5 \
make_parallelism=2 \
travis_cache_dir=~/travis_ccache \
./travis_docker_build.sh &> build_at_$(date +'%Y%m%d_%H%M%S').log
```
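The redirection in the wrapper above names each log with a timestamp, so repeated runs never clobber each other; the naming scheme sketched on its own:

```shell
# Each run writes a fresh, timestamped log, e.g. build_at_20240101_120000.log
logname="build_at_$(date +'%Y%m%d_%H%M%S').log"
echo "$logname"
```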
**IMPORTANT**: Read `fbcode_builder/README.docker` before diving in!
Setting `travis_cache_dir` turns on [ccache](https://ccache.samba.org/),
saving a fresh copy of `ccache.tgz` after every build. This will invalidate
Docker's layer cache, forcing it to rebuild starting right after OS package
setup, but the builds will be fast because all the compiles will be cached.
To iterate without invalidating the Docker layer cache, just `cd
/tmp/docker-context-*` and interact with the `Dockerfile` normally. Note
that the `docker-context-*` dirs preserve a copy of `ccache.tgz` as they
first used it.
# What to read next
The *.py files are fairly well-documented. You might want to peruse them
in this order:
- shell_quoting.py
- fbcode_builder.py
- docker_builder.py
- make_docker_context.py
As far as runs on Travis go, the control flow is:
- .travis.yml calls
- travis_docker_build.sh calls
- docker_build_with_ccache.sh
This library also has an (unpublished) component targeting Facebook's
internal continuous-integration platform using the same build-step DSL.
# Contributing
Please follow the ambient style (or PEP-8), and keep the code Python 2.6
compatible -- since `fbcode_builder`'s only dependency is Docker, we want to
allow building projects on even fairly ancient base systems. We also wish
to be compatible with Python 3, and would appreciate it if you kept that
in mind while making changes.

@ -1,218 +0,0 @@
#!/bin/bash -uex
set -o pipefail # Be sure to `|| :` commands that are allowed to fail.
#
# Future: port this to Python if you are making significant changes.
#
# Parse command-line arguments
build_timeout="" # Default to no time-out
print_usage() {
echo "Usage: $0 [--build-timeout TIMEOUT_VAL] SAVE-CCACHE-TO-DIR"
echo "SAVE-CCACHE-TO-DIR is required. An empty string discards the ccache."
}
while [[ $# -gt 0 ]]; do
case "$1" in
--build-timeout)
shift
build_timeout="$1"
if [[ "$build_timeout" != "" ]] ; then
timeout "$build_timeout" true # fail early on invalid timeouts
fi
;;
-h|--help)
print_usage
exit
;;
*)
break
;;
esac
shift
done
# There is one required argument, but an empty string is allowed.
if [[ "$#" != 1 ]] ; then
print_usage
exit 1
fi
save_ccache_to_dir="$1"
if [[ "$save_ccache_to_dir" != "" ]] ; then
mkdir -p "$save_ccache_to_dir" # fail early if there's nowhere to save
else
echo "WARNING: Will not save /ccache from inside the Docker container"
fi
rand_guid() {
echo "$(date +%s)_${RANDOM}_${RANDOM}_${RANDOM}_${RANDOM}"
}
id=fbcode_builder_image_id=$(rand_guid)
logfile=$(mktemp)
echo "
Running build with timeout '$build_timeout', label $id, and log in $logfile
"
if [[ "$build_timeout" != "" ]] ; then
# Kill the container after $build_timeout. Using `/bin/timeout` would cause
# Docker to destroy the most recent container and lose its cache.
(
sleep "$build_timeout"
echo "Build timed out after $build_timeout" 1>&2
while true; do
maybe_container=$(
egrep '^( ---> Running in [0-9a-f]+|FBCODE_BUILDER_EXIT)$' "$logfile" |
tail -n 1 | awk '{print $NF}'
)
if [[ "$maybe_container" == "FBCODE_BUILDER_EXIT" ]] ; then
echo "Time-out successfully terminated build" 1>&2
break
fi
echo "Time-out: trying to kill $maybe_container" 1>&2
    # This kill may fail if we get unlucky; try again soon.
docker kill "$maybe_container" || sleep 5
done
) &
fi
build_exit_code=0
# `docker build` is allowed to fail, and `pipefail` means we must check the
# failure explicitly.
if ! docker build --label="$id" . 2>&1 | tee "$logfile" ; then
build_exit_code="${PIPESTATUS[0]}"
# NB: We are going to deliberately forge ahead even if `tee` failed.
# If it did, we have a problem with tempfile creation, and all is sad.
echo "Build failed with code $build_exit_code, trying to save ccache" 1>&2
fi
# Stop trying to kill the container.
echo $'\nFBCODE_BUILDER_EXIT' >> "$logfile"
if [[ "$save_ccache_to_dir" == "" ]] ; then
echo "Not inspecting Docker build, since saving the ccache wasn't requested."
exit "$build_exit_code"
fi
img=$(docker images --filter "label=$id" -a -q)
if [[ "$img" == "" ]] ; then
docker images -a
echo "In the above list, failed to find most recent image with $id" 1>&2
# Usually, the above `docker kill` will leave us with an up-to-the-second
# container, from which we can extract the cache. However, if that fails
# for any reason, this loop will instead grab the latest available image.
#
# It's possible for this log search to get confused due to the output of
# the build command itself, but since our builds aren't **trying** to
# break cache, we probably won't randomly hit an ID from another build.
img=$(
egrep '^ ---> (Running in [0-9a-f]+|[0-9a-f]+)$' "$logfile" | tac |
sed 's/Running in /container_/;s/ ---> //;' | (
while read -r x ; do
# Both docker commands below print an image ID to stdout on
# success, so we just need to know when to stop.
if [[ "$x" =~ container_.* ]] ; then
if docker commit "${x#container_}" ; then
break
fi
elif docker inspect --type image -f '{{.Id}}' "$x" ; then
break
fi
done
)
)
if [[ "$img" == "" ]] ; then
echo "Failed to find valid container or image ID in log $logfile" 1>&2
exit 1
fi
elif [[ "$(echo "$img" | wc -l)" != 1 ]] ; then
# Shouldn't really happen, but be explicit if it does.
echo "Multiple images with label $id, taking the latest of:"
echo "$img"
img=$(echo "$img" | head -n 1)
fi
container_name="fbcode_builder_container_$(rand_guid)"
echo "Starting $container_name from latest image of the build with $id --"
echo "$img"
# ccache collection must be done outside of the Docker build steps because
# we need to be able to kill it on timeout.
#
# This step grows the max cache size to slightly exceed the working set
# of a successful build. This simple design persists the max size in the
# cache directory itself (the env var CCACHE_MAXSIZE does not even work with
# older ccaches like the one on 14.04).
#
# Future: copy this script into the Docker image via Dockerfile.
(
# By default, fbcode_builder creates an unsigned image, so the `docker
# run` below would fail if DOCKER_CONTENT_TRUST were set. So we unset it
# just for this one run.
export DOCKER_CONTENT_TRUST=
# CAUTION: The inner bash runs without -uex, so code accordingly.
docker run --user root --name "$container_name" "$img" /bin/bash -c '
build_exit_code='"$build_exit_code"'
# Might be useful if debugging whether max cache size is too small?
grep " Cleaning up cache directory " /tmp/ccache.log
export CCACHE_DIR=/ccache
ccache -s
echo "Total bytes in /ccache:";
total_bytes=$(du -sb /ccache | awk "{print \$1}")
echo "$total_bytes"
echo "Used bytes in /ccache:";
used_bytes=$(
du -sb $(find /ccache -type f -newermt @$(
cat /FBCODE_BUILDER_CCACHE_START_TIME
)) | awk "{t += \$1} END {print t}"
)
echo "$used_bytes"
# Goal: set the max cache to 750MB over 125% of the usage of a
# successful build. If this is too small, it takes too long to get a
# cache fully warmed up. Plus, ccache cleans 100-200MB before reaching
# the max cache size, so a large margin is essential to prevent misses.
desired_mb=$(( 750 + used_bytes / 800000 )) # 125% in decimal MB: 1e6/1.25
if [[ "$build_exit_code" != "0" ]] ; then
# For a bad build, disallow shrinking the max cache size. Instead of
# the max cache size, we use on-disk size, which ccache keeps at least
# 150MB under the actual max size, hence the 400MB safety margin.
cur_max_mb=$(( 400 + total_bytes / 1000000 )) # ccache uses decimal MB
if [[ "$desired_mb" -le "$cur_max_mb" ]] ; then
desired_mb=""
fi
fi
if [[ "$desired_mb" != "" ]] ; then
echo "Updating cache size to $desired_mb MB"
ccache -M "${desired_mb}M"
ccache -s
fi
# Subshell because `time` the binary may not be installed.
if (time tar czf /ccache.tgz /ccache) ; then
ls -l /ccache.tgz
else
# This `else` ensures we never overwrite the current cache with
# partial data in case of error, even if somebody adds code below.
rm /ccache.tgz
exit 1
fi
'
)
echo "Updating $save_ccache_to_dir/ccache.tgz"
# This will not delete the existing cache if `docker run` didn't make one
docker cp "$container_name:/ccache.tgz" "$save_ccache_to_dir/"
# Future: it'd be nice if Travis allowed us to retry if the build timed out,
# since we'll make more progress thanks to the cache. As-is, we have to
# wait for the next commit to land.
echo "Build exited with code $build_exit_code"
exit "$build_exit_code"

@ -1,169 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'''
Extends FBCodeBuilder to produce Docker context directories.
In order to get the largest iteration-time savings from Docker's build
caching, you will want to:
- Use fine-grained steps as appropriate (e.g. separate make & make install),
- Start your action sequence with the lowest-risk steps, and with the steps
that change the least often, and
- Put the steps that you are debugging towards the very end.
'''
import logging
import os
import shutil
import tempfile
from fbcode_builder import FBCodeBuilder
from shell_quoting import (
raw_shell, shell_comment, shell_join, ShellQuoted
)
from utils import recursively_flatten_list, run_command
class DockerFBCodeBuilder(FBCodeBuilder):
def _user(self):
return self.option('user', 'root')
def _change_user(self):
return ShellQuoted('USER {u}').format(u=self._user())
def setup(self):
# Please add RPM-based OSes here as appropriate.
#
# To allow exercising non-root installs -- we change users after the
# system packages are installed. TODO: For users not defined in the
# image, we should probably `useradd`.
return self.step('Setup', [
# Docker's FROM does not understand shell quoting.
ShellQuoted('FROM {}'.format(self.option('os_image'))),
# /bin/sh syntax is a pain
ShellQuoted('SHELL ["/bin/bash", "-c"]'),
] + self.install_debian_deps() + [self._change_user()])
def step(self, name, actions):
assert '\n' not in name, 'Name {0} would span > 1 line'.format(name)
b = ShellQuoted('')
return [ShellQuoted('### {0} ###'.format(name)), b] + actions + [b]
def run(self, shell_cmd):
return ShellQuoted('RUN {cmd}').format(cmd=shell_cmd)
def workdir(self, dir):
return [
# As late as Docker 1.12.5, this results in `build` being owned
# by root:root -- the explicit `mkdir` works around the bug:
# USER nobody
# WORKDIR build
ShellQuoted('USER root'),
ShellQuoted('RUN mkdir -p {d} && chown {u} {d}').format(
d=dir, u=self._user()
),
self._change_user(),
ShellQuoted('WORKDIR {dir}').format(dir=dir),
]
def comment(self, comment):
# This should not be a command since we don't want comment changes
# to invalidate the Docker build cache.
return shell_comment(comment)
def copy_local_repo(self, repo_dir, dest_name):
fd, archive_path = tempfile.mkstemp(
prefix='local_repo_{0}_'.format(dest_name),
suffix='.tgz',
dir=os.path.abspath(self.option('docker_context_dir')),
)
os.close(fd)
run_command('tar', 'czf', archive_path, '.', cwd=repo_dir)
return [
ShellQuoted('ADD {archive} {dest_name}').format(
archive=os.path.basename(archive_path), dest_name=dest_name
),
# Docker permissions make very little sense... see also workdir()
ShellQuoted('USER root'),
ShellQuoted('RUN chown -R {u} {d}').format(
d=dest_name, u=self._user()
),
self._change_user(),
]
def _render_impl(self, steps):
return raw_shell(shell_join('\n', recursively_flatten_list(steps)))
def debian_ccache_setup_steps(self):
source_ccache_tgz = self.option('ccache_tgz', '')
if not source_ccache_tgz:
logging.info('Docker ccache not enabled')
return []
dest_ccache_tgz = os.path.join(
self.option('docker_context_dir'), 'ccache.tgz'
)
try:
try:
os.link(source_ccache_tgz, dest_ccache_tgz)
except OSError:
logging.exception(
'Hard-linking {s} to {d} failed, falling back to copy'
.format(s=source_ccache_tgz, d=dest_ccache_tgz)
)
shutil.copyfile(source_ccache_tgz, dest_ccache_tgz)
except Exception:
logging.exception(
'Failed to copy or link {s} to {d}, aborting'
.format(s=source_ccache_tgz, d=dest_ccache_tgz)
)
raise
return [
# Separate layer so that in development we avoid re-downloads.
self.run(ShellQuoted('apt-get install -yq ccache')),
ShellQuoted('ADD ccache.tgz /'),
ShellQuoted(
# Set CCACHE_DIR before the `ccache` invocations below.
'ENV CCACHE_DIR=/ccache '
# No clang support for now, so it's easiest to hardcode gcc.
'CC="ccache gcc" CXX="ccache g++" '
# Always log for ease of debugging. For real FB projects,
# this log is several megabytes, so dumping it to stdout
# would likely exceed the Travis log limit of 4MB.
#
# On a local machine, `docker cp` will get you the data. To
# get the data out from Travis, I would compress and dump
# uuencoded bytes to the log -- for Bistro this was about
# 600kb or 8000 lines:
#
# apt-get install sharutils
# bzip2 -9 < /tmp/ccache.log | uuencode -m ccache.log.bz2
'CCACHE_LOGFILE=/tmp/ccache.log'
),
self.run(ShellQuoted(
# Future: Skipping this part made this Docker step instant,
# saving ~1min of build time. It's unclear if it is the
# chown or the du, but probably the chown -- since a large
# part of the cost is incurred at image save time.
#
# ccache.tgz may be empty, or may have the wrong
# permissions.
'mkdir -p /ccache && time chown -R nobody /ccache && '
'time du -sh /ccache && '
# Reset stats so `docker_build_with_ccache.sh` can print
# useful values at the end of the run.
'echo === Prev run stats === && ccache -s && ccache -z && '
# Record the current time to let travis_build.sh figure out
# the number of bytes in the cache that are actually used --
# this is crucial for tuning the maximum cache size.
'date +%s > /FBCODE_BUILDER_CCACHE_START_TIME && '
# The build running as `nobody` should be able to write here
'chown nobody /tmp/ccache.log'
)),
]

@ -1,368 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'''
This is a small DSL to describe builds of Facebook's open-source projects
that are published to Github from a single internal repo, including projects
that depend on folly, wangle, proxygen, fbthrift, etc.
This file defines the interface of the DSL, and common utilities, but you
will have to instantiate a specific builder, with specific options, in
order to get work done -- see e.g. make_docker_context.py.
== Design notes ==
Goals:
- A simple declarative language for what needs to be checked out & built,
how, in what order.
- The same specification should work for external continuous integration
builds (e.g. Travis + Docker) and for internal VM-based continuous
integration builds.
- One should be able to build without root, and to install to a prefix.
Non-goals:
- General usefulness. The only point of this is to make it easier to build
and test Facebook's open-source services.
Ideas for the future -- these may not be very good :)
- Especially on Ubuntu 14.04 the current initial setup is inefficient:
we add PPAs after having installed a bunch of packages -- this prompts
reinstalls of large amounts of code. We also `apt-get update` a few
times.
- A "shell script" builder. Like DockerFBCodeBuilder, but outputs a
shell script that runs outside of a container. Or maybe even
synchronously executes the shell commands, `make`-style.
- A "Makefile" generator. That might make iterating on builds even quicker
than what you can currently get with Docker build caching.
- Generate a rebuild script that can be run e.g. inside the built Docker
container by tagging certain steps with list-inheriting Python objects:
* do change directories
* do NOT `git clone` -- if we want to update code this should be a
separate script that e.g. runs rebase on top of specific targets
across all the repos.
* do NOT install software (most / all setup can be skipped)
* do NOT `autoreconf` or `configure`
* do `make` and `cmake`
- If we get non-Debian OSes, part of ccache setup should be factored out.
'''
import os
import re
from shell_quoting import path_join, shell_join, ShellQuoted
def _read_project_github_hashes():
base_dir = 'deps/github_hashes/' # trailing slash used in regex below
for dirname, _, files in os.walk(base_dir):
for filename in files:
path = os.path.join(dirname, filename)
with open(path) as f:
m_proj = re.match('^' + base_dir + '(.*)-rev\.txt$', path)
if m_proj is None:
raise RuntimeError('Not a hash file? {0}'.format(path))
m_hash = re.match('^Subproject commit ([0-9a-f]+)\n$', f.read())
if m_hash is None:
raise RuntimeError('No hash in {0}'.format(path))
yield m_proj.group(1), m_hash.group(1)
class FBCodeBuilder(object):
def __init__(self, **kwargs):
self._options_do_not_access = kwargs # Use .option() instead.
# This raises upon detecting options that are specified but unused,
# because otherwise it is very easy to make a typo in option names.
self.options_used = set()
self._github_hashes = dict(_read_project_github_hashes())
def __repr__(self):
return '{0}({1})'.format(
self.__class__.__name__,
', '.join(
'{0}={1}'.format(k, repr(v))
for k, v in self._options_do_not_access.items()
)
)
def option(self, name, default=None):
value = self._options_do_not_access.get(name, default)
if value is None:
raise RuntimeError('Option {0} is required'.format(name))
self.options_used.add(name)
return value
def has_option(self, name):
return name in self._options_do_not_access
def add_option(self, name, value):
if name in self._options_do_not_access:
raise RuntimeError('Option {0} already set'.format(name))
self._options_do_not_access[name] = value
#
# Abstract parts common to every installation flow
#
def render(self, steps):
'''
Converts nested actions to your builder's expected output format.
Typically takes the output of build().
'''
res = self._render_impl(steps) # Implementation-dependent
# Now that the output is rendered, we expect all options to have
# been used.
unused_options = set(self._options_do_not_access)
unused_options -= self.options_used
if unused_options:
raise RuntimeError(
'Unused options: {0} -- please check if you made a typo '
'in any of them. Those that are truly not useful should '
'not be set so that this typo detection can be useful.'
.format(unused_options)
)
return res
def build(self, steps):
if not steps:
raise RuntimeError('Please ensure that the config you are passing '
'contains steps')
return [self.setup(), self.diagnostics()] + steps
def setup(self):
'Your builder may want to install packages here.'
raise NotImplementedError
def diagnostics(self):
'Log some system diagnostics before/after setup for ease of debugging'
# The builder's repr is not used in a command to avoid pointlessly
# invalidating Docker's build cache.
return self.step('Diagnostics', [
self.comment('Builder {0}'.format(repr(self))),
self.run(ShellQuoted('hostname')),
self.run(ShellQuoted('cat /etc/issue || echo no /etc/issue')),
self.run(ShellQuoted('g++ --version || echo g++ not installed')),
self.run(ShellQuoted('cmake --version || echo cmake not installed')),
])
def step(self, name, actions):
'A labeled collection of actions or other steps'
raise NotImplementedError
def run(self, shell_cmd):
'Run this bash command'
raise NotImplementedError
def workdir(self, dir):
'Create this directory if it does not exist, and change into it'
raise NotImplementedError
def copy_local_repo(self, dir, dest_name):
'''
Copy the local repo at `dir` into this step's `workdir()`, analog of:
cp -r /path/to/folly folly
'''
raise NotImplementedError
def debian_deps(self):
return [
'autoconf-archive',
'bison',
'build-essential',
'cmake',
'curl',
'flex',
'git',
'gperf',
'joe',
'libboost-all-dev',
'libcap-dev',
'libdouble-conversion-dev',
'libevent-dev',
'libgflags-dev',
'libgoogle-glog-dev',
'libkrb5-dev',
'libpcre3-dev',
'libpthread-stubs0-dev',
'libnuma-dev',
'libsasl2-dev',
'libsnappy-dev',
'libsqlite3-dev',
'libssl-dev',
'libtool',
'netcat-openbsd',
'pkg-config',
'sudo',
'unzip',
'wget',
]
#
# Specific build helpers
#
def install_debian_deps(self):
actions = [
self.run(
ShellQuoted('apt-get update && apt-get install -yq {deps}').format(
deps=shell_join(' ', (
ShellQuoted(dep) for dep in self.debian_deps())))
),
]
gcc_version = self.option('gcc_version')
# Make the selected GCC the default before building anything
actions.extend([
self.run(ShellQuoted('apt-get install -yq {c} {cpp}').format(
c=ShellQuoted('gcc-{v}').format(v=gcc_version),
cpp=ShellQuoted('g++-{v}').format(v=gcc_version),
)),
self.run(ShellQuoted(
'update-alternatives --install /usr/bin/gcc gcc {c} 40 '
'--slave /usr/bin/g++ g++ {cpp}'
).format(
c=ShellQuoted('/usr/bin/gcc-{v}').format(v=gcc_version),
cpp=ShellQuoted('/usr/bin/g++-{v}').format(v=gcc_version),
)),
self.run(ShellQuoted('update-alternatives --config gcc')),
])
actions.extend(self.debian_ccache_setup_steps())
return self.step('Install packages for Debian-based OS', actions)
def debian_ccache_setup_steps(self):
return [] # It's ok to ship a renderer without ccache support.
def github_project_workdir(self, project, path):
# Only check out a non-default branch if requested. This especially
# makes sense when building from a local repo.
git_hash = self.option(
'{0}:git_hash'.format(project),
# Any repo that has a hash in deps/github_hashes defaults to
# that, with the goal of making builds maximally consistent.
self._github_hashes.get(project, '')
)
maybe_change_branch = [
self.run(ShellQuoted('git checkout {hash}').format(hash=git_hash)),
] if git_hash else []
base_dir = self.option('projects_dir')
local_repo_dir = self.option('{0}:local_repo_dir'.format(project), '')
return self.step('Check out {0}, workdir {1}'.format(project, path), [
self.workdir(base_dir),
self.run(
ShellQuoted('git clone https://github.com/{p}').format(p=project)
) if not local_repo_dir else self.copy_local_repo(
local_repo_dir, os.path.basename(project)
),
self.workdir(path_join(base_dir, os.path.basename(project), path)),
] + maybe_change_branch)
def fb_github_project_workdir(self, project_and_path, github_org='facebook'):
'This helper lets Facebook-internal CI special-case FB projects'
project, path = project_and_path.split('/', 1)
return self.github_project_workdir(github_org + '/' + project, path)
def _make_vars(self, make_vars):
return shell_join(' ', (
ShellQuoted('{k}={v}').format(k=k, v=v)
for k, v in ({} if make_vars is None else make_vars).items()
))
def parallel_make(self, make_vars=None):
return self.run(ShellQuoted('make -j {n} {vars}').format(
n=self.option('make_parallelism'),
vars=self._make_vars(make_vars),
))
def make_and_install(self, make_vars=None):
return [
self.parallel_make(make_vars),
self.run(ShellQuoted('make install {vars}').format(
vars=self._make_vars(make_vars),
)),
]
def configure(self, name=None):
autoconf_options = {}
if name is not None:
autoconf_options.update(
self.option('{0}:autoconf_options'.format(name), {})
)
return [
self.run(ShellQuoted(
'LDFLAGS="$LDFLAGS -L"{p}"/lib -Wl,-rpath="{p}"/lib" '
'CFLAGS="$CFLAGS -I"{p}"/include" '
'CPPFLAGS="$CPPFLAGS -I"{p}"/include" '
'PY_PREFIX={p} '
'./configure --prefix={p} {args}'
).format(
p=self.option('prefix'),
args=shell_join(' ', (
ShellQuoted('{k}={v}').format(k=k, v=v)
for k, v in autoconf_options.items()
)),
)),
]
def autoconf_install(self, name):
return self.step('Build and install {0}'.format(name), [
self.run(ShellQuoted('autoreconf -ivf')),
] + self.configure() + self.make_and_install())
def cmake_configure(self, name, cmake_path='..'):
cmake_defines = {
'BUILD_SHARED_LIBS': 'ON',
'CMAKE_INSTALL_PREFIX': self.option('prefix'),
}
cmake_defines.update(
self.option('{0}:cmake_defines'.format(name), {})
)
return [
self.run(ShellQuoted(
'CXXFLAGS="$CXXFLAGS -fPIC -isystem "{p}"/include" '
'CFLAGS="$CFLAGS -fPIC -isystem "{p}"/include" '
'cmake {args} {cmake_path}'
).format(
p=self.option('prefix'),
args=shell_join(' ', (
ShellQuoted('-D{k}={v}').format(k=k, v=v)
for k, v in cmake_defines.items()
)),
cmake_path=cmake_path,
)),
]
def cmake_install(self, name, cmake_path='..'):
return self.step(
'Build and install {0}'.format(name),
self.cmake_configure(name, cmake_path) + self.make_and_install()
)
def fb_github_autoconf_install(self, project_and_path, github_org='facebook'):
return [
self.fb_github_project_workdir(project_and_path, github_org),
self.autoconf_install(project_and_path),
]
def fb_github_cmake_install(self, project_and_path, cmake_path='..', github_org='facebook'):
return [
self.fb_github_project_workdir(project_and_path, github_org),
self.cmake_install(project_and_path, cmake_path),
]

@ -1,14 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'Demo config, so that `make_docker_context.py --help` works in this directory.'
config = {
'fbcode_builder_spec': lambda _builder: {
'depends_on': [],
'steps': [],
},
'github_project': 'demo/project',
}

@ -1,174 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'''
Reads `fbcode_builder_config.py` from the current directory, and prepares a
Docker context directory to build this project. Prints to stdout the path
to the context directory.
Try `.../make_docker_context.py --help` from a project's `build/` directory.
By default, the Docker context directory will be in /tmp. It will always
contain a Dockerfile, and might also contain copies of your local repos, and
other data needed for the build container.
'''
import os
import tempfile
import textwrap
from docker_builder import DockerFBCodeBuilder
from parse_args import parse_args_to_fbcode_builder_opts
def make_docker_context(
get_steps_fn, github_project, opts=None, default_context_dir=None
):
'''
Returns a path to the Docker context directory. See parse_args.py.
Helper for making a command-line utility that writes your project's
Dockerfile and associated data into a (temporary) directory. Your main
program might look something like this:
print(make_docker_context(
lambda builder: [builder.step(...), ...],
'facebook/your_project',
))
'''
if opts is None:
opts = {}
valid_versions = (
('ubuntu:16.04', '5'),
)
def add_args(parser):
parser.add_argument(
'--docker-context-dir', metavar='DIR',
default=default_context_dir,
help='Write the Dockerfile and its context into this directory. '
'If empty, make a temporary directory. Default: %(default)s.',
)
parser.add_argument(
'--user', metavar='NAME', default=opts.get('user', 'nobody'),
help='Build and install as this user. Default: %(default)s.',
)
parser.add_argument(
'--prefix', metavar='DIR',
default=opts.get('prefix', '/home/install'),
help='Install all libraries in this prefix. Default: %(default)s.',
)
parser.add_argument(
'--projects-dir', metavar='DIR',
default=opts.get('projects_dir', '/home'),
help='Place project code directories here. Default: %(default)s.',
)
parser.add_argument(
'--os-image', metavar='IMG', choices=list(zip(*valid_versions))[0],
default=opts.get('os_image', valid_versions[0][0]),
help='Docker OS image -- be sure to use only ones you trust (See '
'README.docker). Choices: %(choices)s. Default: %(default)s.',
)
parser.add_argument(
'--gcc-version', metavar='VER',
choices=set(list(zip(*valid_versions))[1]),
default=opts.get('gcc_version', valid_versions[0][1]),
help='Choices: %(choices)s. Default: %(default)s.',
)
parser.add_argument(
'--make-parallelism', metavar='NUM', type=int,
default=opts.get('make_parallelism', 1),
help='Use `make -j` on multi-CPU systems with lots of RAM. '
'Default: %(default)s.',
)
parser.add_argument(
'--local-repo-dir', metavar='DIR',
help='If set, build {0} from a local directory instead of Github.'
.format(github_project),
)
parser.add_argument(
'--ccache-tgz', metavar='PATH',
help='If set, enable ccache for the build. To initialize the '
'cache, first try to hardlink, then to copy --ccache-tgz '
'as ccache.tgz into the --docker-context-dir.'
)
opts = parse_args_to_fbcode_builder_opts(
add_args,
# These have add_argument() calls, others are set via --option.
(
'docker_context_dir',
'user',
'prefix',
'projects_dir',
'os_image',
'gcc_version',
'make_parallelism',
'local_repo_dir',
'ccache_tgz',
),
opts,
help=textwrap.dedent('''
Reads `fbcode_builder_config.py` from the current directory, and
prepares a Docker context directory to build {github_project} and
its dependencies. Prints to stdout the path to the context
directory.
Pass --option {github_project}:git_hash SHA1 to build something
other than the master branch from Github.
Or, pass --option {github_project}:local_repo_dir LOCAL_PATH to
build from a local repo instead of cloning from Github.
Usage:
(cd $(./make_docker_context.py) && docker build . 2>&1 | tee log)
'''.format(github_project=github_project)),
)
# This allows travis_docker_build.sh not to know the main Github project.
local_repo_dir = opts.pop('local_repo_dir', None)
if local_repo_dir is not None:
opts['{0}:local_repo_dir'.format(github_project)] = local_repo_dir
if (opts.get('os_image'), opts.get('gcc_version')) not in valid_versions:
raise Exception(
'Due to GCC 4/5 ABI changes (std::string), we can only use {0}'.format(
' / '.join('GCC {1} on {0}'.format(*p) for p in valid_versions)
)
)
if opts.get('docker_context_dir') is None:
opts['docker_context_dir'] = tempfile.mkdtemp(prefix='docker-context-')
elif not os.path.exists(opts.get('docker_context_dir')):
os.makedirs(opts.get('docker_context_dir'))
builder = DockerFBCodeBuilder(**opts)
context_dir = builder.option('docker_context_dir') # Mark option "in-use"
# The renderer may also populate some files into the context_dir.
dockerfile = builder.render(get_steps_fn(builder))
with os.fdopen(os.open(
os.path.join(context_dir, 'Dockerfile'),
os.O_RDWR | os.O_CREAT | os.O_EXCL, # Do not overwrite existing files
0o644,
), 'w') as f:
f.write(dockerfile)
return context_dir
if __name__ == '__main__':
from utils import read_fbcode_builder_config, build_fbcode_builder_config
# Load a spec from the current directory
config = read_fbcode_builder_config('fbcode_builder_config.py')
print(make_docker_context(
build_fbcode_builder_config(config),
config['github_project'],
))

@ -1,82 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'Argument parsing logic shared by all fbcode_builder CLI tools.'
import argparse
import logging
from shell_quoting import raw_shell, ShellQuoted
def parse_args_to_fbcode_builder_opts(add_args_fn, top_level_opts, opts, help):
'''
Provides some standard arguments: --debug, --option, --shell-quoted-option
Then, calls `add_args_fn(parser)` to add application-specific arguments.
`opts` are first used as defaults for the various command-line
arguments. Then, the parsed arguments are mapped back into `opts`,
which then become the values for `FBCodeBuilder.option()`, to be used
both by the builder and by `get_steps_fn()`.
`help` is printed in response to the `--help` argument.
'''
top_level_opts = set(top_level_opts)
parser = argparse.ArgumentParser(
description=help,
formatter_class=argparse.RawDescriptionHelpFormatter
)
add_args_fn(parser)
parser.add_argument(
'--option', nargs=2, metavar=('KEY', 'VALUE'), action='append',
default=[
(k, v) for k, v in opts.items()
if k not in top_level_opts and not isinstance(v, ShellQuoted)
],
help='Set project-specific options. These are assumed to be raw '
'strings, to be shell-escaped as needed. Default: %(default)s.',
)
parser.add_argument(
'--shell-quoted-option', nargs=2, metavar=('KEY', 'VALUE'),
action='append',
default=[
(k, raw_shell(v)) for k, v in opts.items()
if k not in top_level_opts and isinstance(v, ShellQuoted)
],
help='Set project-specific options. These are assumed to be shell-'
'quoted, and may be used in commands as-is. Default: %(default)s.',
)
parser.add_argument('--debug', action='store_true', help='Log more')
args = parser.parse_args()
logging.basicConfig(
level=logging.DEBUG if args.debug else logging.INFO,
format='%(levelname)s: %(message)s'
)
# Map command-line args back into opts.
logging.debug('opts before command-line arguments: {0}'.format(opts))
new_opts = {}
for key in top_level_opts:
val = getattr(args, key)
# Allow clients to unset a default by passing a value of None in opts
if val is not None:
new_opts[key] = val
for key, val in args.option:
new_opts[key] = val
for key, val in args.shell_quoted_option:
new_opts[key] = ShellQuoted(val)
logging.debug('opts after command-line arguments: {0}'.format(new_opts))
return new_opts
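The `--option` flag defined above uses `nargs=2` with `action='append'`, so each occurrence on the command line contributes one `(KEY, VALUE)` pair, and the pairs are later folded back into the `opts` dict. A minimal stand-alone sketch of that parsing pattern (hypothetical flag values, not the real fbcode_builder options):

```python
import argparse

# Mirror the --option declaration from parse_args_to_fbcode_builder_opts:
# each use appends a [KEY, VALUE] pair to args.option.
parser = argparse.ArgumentParser()
parser.add_argument('--option', nargs=2, metavar=('KEY', 'VALUE'),
                    action='append', default=[])
args = parser.parse_args(['--option', 'user', 'nobody',
                          '--option', 'prefix', '/home/install'])

# Fold the accumulated pairs into a dict, as the real code does for opts.
opts = dict(args.option)
print(opts)  # {'user': 'nobody', 'prefix': '/home/install'}
```

Later pairs with the same KEY overwrite earlier ones when folded into the dict, which is why the real code can layer command-line values over defaults.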

@ -1,111 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'''
shell_builder.py allows running the fbcode_builder logic
on the host rather than in a container.
It emits a bash script with set -exo pipefail configured such that
any failing step will cause the script to exit with failure.
== How to run it? ==
cd build
python fbcode_builder/shell_builder.py > ~/run.sh
bash ~/run.sh
'''
import os
import distutils.spawn
from fbcode_builder import FBCodeBuilder
from shell_quoting import (
raw_shell, shell_comment, shell_join, ShellQuoted
)
from utils import recursively_flatten_list
class ShellFBCodeBuilder(FBCodeBuilder):
def _render_impl(self, steps):
return raw_shell(shell_join('\n', recursively_flatten_list(steps)))
def workdir(self, dir):
return [
ShellQuoted('mkdir -p {d} && cd {d}').format(
d=dir
),
]
def run(self, shell_cmd):
return ShellQuoted('{cmd}').format(cmd=shell_cmd)
def step(self, name, actions):
assert '\n' not in name, 'Name {0} would span > 1 line'.format(name)
b = ShellQuoted('')
return [ShellQuoted('### {0} ###'.format(name)), b] + actions + [b]
def setup(self):
steps = [
ShellQuoted('set -exo pipefail'),
]
if self.has_option('ccache_dir'):
ccache_dir = self.option('ccache_dir')
steps += [
ShellQuoted(
# Set CCACHE_DIR before the `ccache` invocations below.
'export CCACHE_DIR={ccache_dir} '
'CC="ccache ${{CC:-gcc}}" CXX="ccache ${{CXX:-g++}}"'
).format(ccache_dir=ccache_dir)
]
return steps
def comment(self, comment):
return shell_comment(comment)
def copy_local_repo(self, dir, dest_name):
return [
ShellQuoted('cp -r {dir} {dest_name}').format(
dir=dir,
dest_name=dest_name
),
]
def find_project_root():
here = os.path.dirname(os.path.realpath(__file__))
maybe_root = os.path.dirname(os.path.dirname(here))
if os.path.isdir(os.path.join(maybe_root, '.git')):
return maybe_root
raise RuntimeError(
"I expected shell_builder.py to be in the "
"build/fbcode_builder subdir of a git repo")
def persistent_temp_dir(repo_root):
escaped = repo_root.replace('/', 'sZs').replace('\\', 'sZs').replace(':', '')
return os.path.join(os.path.expandvars("$HOME"), '.fbcode_builder-' + escaped)
if __name__ == '__main__':
from utils import read_fbcode_builder_config, build_fbcode_builder_config
repo_root = find_project_root()
temp = persistent_temp_dir(repo_root)
config = read_fbcode_builder_config('fbcode_builder_config.py')
builder = ShellFBCodeBuilder()
builder.add_option('projects_dir', temp)
if distutils.spawn.find_executable('ccache'):
builder.add_option('ccache_dir',
os.environ.get('CCACHE_DIR', os.path.join(temp, '.ccache')))
builder.add_option('prefix', os.path.join(temp, 'installed'))
builder.add_option('make_parallelism', 4)
builder.add_option(
'{project}:local_repo_dir'.format(project=config['github_project']),
repo_root)
make_steps = build_fbcode_builder_config(config)
steps = make_steps(builder)
print(builder.render(steps))

@ -1,98 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'''
Almost every FBCodeBuilder string is ultimately passed to a shell. Escaping
too little or too much tends to be the most common error. The utilities in
this file give a systematic way of avoiding such bugs:
- When you write literal strings destined for the shell, use `ShellQuoted`.
- When these literal strings are parameterized, use `ShellQuoted.format`.
- Any parameters that are raw strings get `shell_quote`d automatically,
while any ShellQuoted parameters will be left intact.
- Use `path_join` to join path components.
- Use `shell_join` to join already-quoted command arguments or shell lines.
'''
import os
from collections import namedtuple
class ShellQuoted(namedtuple('ShellQuoted', ('do_not_use_raw_str',))):
'''
Wrap a string with this to make it transparent to shell_quote(). It
will almost always suffice to use ShellQuoted.format(), path_join(),
or shell_join().
If you really must, use raw_shell() to access the raw string.
'''
def __new__(cls, s):
'No need to nest ShellQuoted.'
return super(ShellQuoted, cls).__new__(
cls, s.do_not_use_raw_str if isinstance(s, ShellQuoted) else s
)
def __str__(self):
raise RuntimeError(
'One does not simply convert {0} to a string -- use path_join() '
'or ShellQuoted.format() instead'.format(repr(self))
)
def __repr__(self):
return '{0}({1})'.format(
self.__class__.__name__, repr(self.do_not_use_raw_str)
)
def format(self, **kwargs):
'''
Use instead of str.format() when the arguments are either
`ShellQuoted()` or raw strings needing to be `shell_quote()`d.
Positional args are deliberately not supported since they are more
error-prone.
'''
return ShellQuoted(self.do_not_use_raw_str.format(**dict(
(k, shell_quote(v).do_not_use_raw_str) for k, v in kwargs.items()
)))
def shell_quote(s):
'Quotes a string if it is not already quoted'
return s if isinstance(s, ShellQuoted) \
else ShellQuoted("'" + str(s).replace("'", "'\\''") + "'")
def raw_shell(s):
'Not a member of ShellQuoted so we get a useful error for raw strings'
if isinstance(s, ShellQuoted):
return s.do_not_use_raw_str
raise RuntimeError('{0} should have been ShellQuoted'.format(s))
def shell_join(delim, it):
'Joins an iterable of ShellQuoted with a delimiter between each two'
return ShellQuoted(delim.join(raw_shell(s) for s in it))
def path_join(*args):
'Joins ShellQuoted and raw pieces of paths to make a shell-quoted path'
return ShellQuoted(os.path.join(*[
raw_shell(shell_quote(s)) for s in args
]))
def shell_comment(c):
'Do not shell-escape raw strings in comments, but do handle line breaks.'
return ShellQuoted('# {c}').format(c=ShellQuoted(
(raw_shell(c) if isinstance(c, ShellQuoted) else c)
.replace('\n', '\n# ')
))
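The core escaping rule behind `shell_quote()` above is the standard single-quote dance: close the quote, emit an escaped quote, reopen. A minimal stand-alone sketch of just that rule (the real module additionally wraps results in the `ShellQuoted` namedtuple; this hypothetical reimplementation drops that wrapper for brevity):

```python
def shell_quote(s):
    # Single-quote the string; embedded ' becomes '\'' (close, escaped
    # quote, reopen), which the shell concatenates back into one word.
    return "'" + str(s).replace("'", "'\\''") + "'"

print(shell_quote("abc"))         # 'abc'
print(shell_quote("it's a test")) # 'it'\''s a test'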

@ -1,43 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import specs.folly as folly
import specs.fizz as fizz
import specs.sodium as sodium
import specs.wangle as wangle
import specs.zstd as zstd
from shell_quoting import ShellQuoted
def fbcode_builder_spec(builder):
# This API should change rarely, so build the latest tag instead of master.
builder.add_option(
'no1msd/mstch:git_hash',
ShellQuoted('$(git describe --abbrev=0 --tags)')
)
builder.add_option(
'rsocket/rsocket-cpp/yarpl/build:cmake_defines', {'BUILD_TESTS': 'OFF'}
)
builder.add_option('krb5/krb5:git_hash', 'krb5-1.16.1-final')
return {
'depends_on': [folly, fizz, sodium, wangle, zstd],
'steps': [
# This isn't a separate spec, since only fbthrift uses mstch.
builder.github_project_workdir('no1msd/mstch', 'build'),
builder.cmake_install('no1msd/mstch'),
builder.github_project_workdir('krb5/krb5', 'src'),
builder.autoconf_install('krb5/krb5'),
builder.github_project_workdir(
'rsocket/rsocket-cpp', 'yarpl/build'
),
builder.step('configuration for yarpl', [
builder.cmake_configure('rsocket/rsocket-cpp/yarpl/build'),
]),
builder.cmake_install('rsocket/rsocket-cpp/yarpl'),
builder.fb_github_cmake_install('fbthrift/thrift'),
],
}

@ -1,39 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import specs.fbthrift as fbthrift
import specs.folly as folly
import specs.gmock as gmock
import specs.sodium as sodium
import specs.sigar as sigar
from shell_quoting import ShellQuoted
def fbcode_builder_spec(builder):
builder.add_option('zeromq/libzmq:git_hash', 'v4.2.5')
return {
'depends_on': [folly, fbthrift, gmock, sodium, sigar],
'steps': [
builder.github_project_workdir('zeromq/libzmq', '.'),
builder.step('Build and install zeromq/libzmq', [
builder.run(ShellQuoted('./autogen.sh')),
builder.configure(),
builder.make_and_install(),
]),
builder.fb_github_project_workdir('fbzmq/fbzmq/build', 'facebook'),
builder.step('Build and install fbzmq/fbzmq/build', [
builder.cmake_configure('fbzmq/fbzmq/build'),
# we need the pythonpath to find the thrift compiler
builder.run(ShellQuoted(
'PYTHONPATH="$PYTHONPATH:"{p}/lib/python2.7/site-packages '
'make -j {n}'
).format(p=builder.option('prefix'), n=builder.option('make_parallelism'))),
builder.run(ShellQuoted('make install')),
]),
],
}

@ -1,20 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import specs.folly as folly
import specs.sodium as sodium
def fbcode_builder_spec(builder):
return {
'depends_on': [folly, sodium],
'steps': [
builder.fb_github_cmake_install(
'fizz/fizz/build',
github_org='facebookincubator',
),
],
}

@ -1,13 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
def fbcode_builder_spec(builder):
return {
'steps': [
builder.fb_github_cmake_install('folly/folly'),
],
}

@ -1,19 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
def fbcode_builder_spec(builder):
builder.add_option('google/googletest:git_hash', 'release-1.8.1')
builder.add_option(
'google/googletest:cmake_defines',
{'BUILD_GTEST': 'ON'}
)
return {
'steps': [
builder.github_project_workdir('google/googletest', 'build'),
builder.cmake_install('google/googletest'),
],
}

@ -1,19 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import specs.folly as folly
import specs.fizz as fizz
import specs.sodium as sodium
import specs.wangle as wangle
def fbcode_builder_spec(builder):
return {
'depends_on': [folly, wangle, fizz, sodium],
'steps': [
builder.fb_github_autoconf_install('proxygen/proxygen'),
],
}

@ -1,14 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
def fbcode_builder_spec(builder):
return {
'steps': [
builder.github_project_workdir('google/re2', 'build'),
builder.cmake_install('google/re2'),
],
}

@ -1,22 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from shell_quoting import ShellQuoted
def fbcode_builder_spec(builder):
builder.add_option(
'hyperic/sigar:autoconf_options', {'CFLAGS' : '-fgnu89-inline'})
return {
'steps': [
builder.github_project_workdir('hyperic/sigar', '.'),
builder.step('Build and install sigar', [
builder.run(ShellQuoted('./autogen.sh')),
builder.configure('hyperic/sigar'),
builder.make_and_install(),
]),
],
}

@ -1,21 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from shell_quoting import ShellQuoted
def fbcode_builder_spec(builder):
builder.add_option('jedisct1/libsodium:git_hash', 'stable')
return {
'steps': [
builder.github_project_workdir('jedisct1/libsodium', '.'),
builder.step('Build and install jedisct1/libsodium', [
builder.run(ShellQuoted('./autogen.sh')),
builder.configure(),
builder.make_and_install(),
]),
],
}

@ -1,20 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import specs.folly as folly
import specs.fizz as fizz
import specs.sodium as sodium
def fbcode_builder_spec(builder):
# Projects that simply depend on Wangle need not spend time on tests.
builder.add_option('wangle/wangle/build:cmake_defines', {'BUILD_TESTS': 'OFF'})
return {
'depends_on': [folly, fizz, sodium],
'steps': [
builder.fb_github_cmake_install('wangle/wangle/build'),
],
}

@ -1,25 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from shell_quoting import ShellQuoted
def fbcode_builder_spec(builder):
# This API should change rarely, so build the latest tag instead of master.
builder.add_option(
'facebook/zstd:git_hash',
ShellQuoted('$(git describe --abbrev=0 --tags origin/master)')
)
return {
'steps': [
builder.github_project_workdir('facebook/zstd', '.'),
builder.step('Build and install zstd', [
builder.make_and_install(make_vars={
'PREFIX': builder.option('prefix'),
})
]),
],
}

@ -1,41 +0,0 @@
#!/bin/bash -uex
# .travis.yml in the top-level dir explains why this is a separate script.
# Read the docs: ./make_docker_context.py --help
os_image=${os_image?Must be set by Travis}
gcc_version=${gcc_version?Must be set by Travis}
make_parallelism=${make_parallelism:-4}
# ccache is off unless requested
travis_cache_dir=${travis_cache_dir:-}
# The docker build never times out, unless specified
docker_build_timeout=${docker_build_timeout:-}
cur_dir="$(readlink -f "$(dirname "$0")")"
if [[ "$travis_cache_dir" == "" ]]; then
echo "ccache disabled, enable by setting env. var. travis_cache_dir"
ccache_tgz=""
elif [[ -e "$travis_cache_dir/ccache.tgz" ]]; then
ccache_tgz="$travis_cache_dir/ccache.tgz"
else
echo "$travis_cache_dir/ccache.tgz does not exist, starting with empty cache"
ccache_tgz=$(mktemp)
tar -T /dev/null -czf "$ccache_tgz"
fi
docker_context_dir=$(
cd "$cur_dir/.." # Let the script find our fbcode_builder_config.py
"$cur_dir/make_docker_context.py" \
--os-image "$os_image" \
--gcc-version "$gcc_version" \
--make-parallelism "$make_parallelism" \
--local-repo-dir "$cur_dir/../.." \
--ccache-tgz "$ccache_tgz"
)
cd "${docker_context_dir?Failed to make Docker context directory}"
# Make it safe to iterate on the .sh in the tree while the script runs.
cp "$cur_dir/docker_build_with_ccache.sh" .
exec ./docker_build_with_ccache.sh \
--build-timeout "$docker_build_timeout" \
"$travis_cache_dir"

@ -1,99 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'Miscellaneous utility functions.'
import itertools
import logging
import os
import shutil
import subprocess
import sys
from contextlib import contextmanager
def recursively_flatten_list(l):
return itertools.chain.from_iterable(
(recursively_flatten_list(i) if type(i) is list else (i,))
for i in l
)
def run_command(*cmd, **kwargs):
'The stdout of most fbcode_builder utilities is meant to be parsed.'
logging.debug('Running: {0} with {1}'.format(cmd, kwargs))
kwargs['stdout'] = sys.stderr
subprocess.check_call(cmd, **kwargs)
@contextmanager
def make_temp_dir(d):
os.mkdir(d)
try:
yield d
finally:
shutil.rmtree(d, ignore_errors=True)
def _inner_read_config(path):
'''
Helper to read a named config file.
The grossness with the global is a workaround for this python bug:
https://bugs.python.org/issue21591
The bug prevents us from defining either a local function or a lambda
in the scope of read_fbcode_builder_config below.
'''
global _project_dir
full_path = os.path.join(_project_dir, path)
return read_fbcode_builder_config(full_path)
def read_fbcode_builder_config(filename):
# Allow one spec to read another
# When doing so, treat paths as relative to the config's project directory.
# _project_dir is a "local" for _inner_read_config; see the comments
# in that function for an explanation of the use of global.
global _project_dir
_project_dir = os.path.dirname(filename)
scope = {'read_fbcode_builder_config': _inner_read_config}
with open(filename) as config_file:
code = compile(config_file.read(), filename, mode='exec')
# Exec is generally unsafe. See B102 (exec_used). https://bandit.readthedocs.io/en/latest/plugins/b102_exec_used.html
# This is not shipping code, but build code that is part of folly.
# After reviewing the code in this repo, this is only called with config files that are part of this repo,
# so no 3rd party code is evaluated.
exec(code, scope) # nosec
return scope['config']
def steps_for_spec(builder, spec, processed_modules=None):
'''
Sets `builder` configuration, and returns all the builder steps
necessary to build `spec` and its dependencies.
Traverses the dependencies in depth-first order, honoring the sequencing
in each 'depends_on' list.
'''
if processed_modules is None:
processed_modules = set()
steps = []
for module in spec.get('depends_on', []):
if module not in processed_modules:
processed_modules.add(module)
steps.extend(steps_for_spec(
builder,
module.fbcode_builder_spec(builder),
processed_modules
))
steps.extend(spec.get('steps', []))
return steps
def build_fbcode_builder_config(config):
return lambda builder: builder.build(
steps_for_spec(builder, config['fbcode_builder_spec'](builder))
)
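`steps_for_spec()` above is a depth-first traversal over `depends_on` lists with a visited set, so each dependency contributes its steps exactly once, before anything that depends on it. A toy stand-alone sketch of the same traversal, using plain dicts in place of spec modules (hypothetical spec names, not real fbcode_builder specs):

```python
def steps_for_spec(spec, seen=None):
    # Depth-first over depends_on, deduplicating by spec name so a shared
    # dependency's steps appear once, before its dependents' steps.
    if seen is None:
        seen = set()
    steps = []
    for module in spec.get('depends_on', []):
        if module['name'] not in seen:
            seen.add(module['name'])
            steps.extend(steps_for_spec(module, seen))
    steps.extend(spec.get('steps', []))
    return steps

folly = {'name': 'folly', 'steps': ['build folly']}
wangle = {'name': 'wangle', 'depends_on': [folly], 'steps': ['build wangle']}
top = {'name': 'top', 'depends_on': [folly, wangle], 'steps': ['build top']}
print(steps_for_spec(top))  # ['build folly', 'build wangle', 'build top']
```

Note that `folly` is reachable twice (directly and via `wangle`) but its step is emitted only once, which matches the `processed_modules` deduplication in the real function.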

@ -1,41 +0,0 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
'fbcode_builder steps to build & test folly'
import specs.gmock as gmock
from shell_quoting import ShellQuoted
def fbcode_builder_spec(builder):
builder.add_option(
'folly/_build:cmake_defines',
{
'BUILD_SHARED_LIBS': 'OFF',
'BUILD_TESTS': 'ON',
}
)
return {
'depends_on': [gmock],
'steps': [
builder.fb_github_cmake_install('folly/_build'),
builder.step(
'Run folly tests', [
builder.run(
ShellQuoted('ctest --output-on-failure -j {n}')
.format(n=builder.option('make_parallelism'))
)
]
),
]
}
config = {
'github_project': 'facebook/folly',
'fbcode_builder_spec': fbcode_builder_spec,
}

@ -1,158 +0,0 @@
/*
* Copyright 2013-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <limits>
#include <boost/noncopyable.hpp>
#include <folly/Portability.h>
namespace folly {
/**
* An atomic bitset of fixed size (specified at compile time).
*/
template <size_t N>
class AtomicBitSet : private boost::noncopyable {
public:
/**
* Construct an AtomicBitSet; all bits are initially false.
*/
AtomicBitSet();
/**
* Set bit idx to true, using the given memory order. Returns the
* previous value of the bit.
*
* Note that the operation is a read-modify-write operation due to the use
* of fetch_or.
*/
bool set(size_t idx, std::memory_order order = std::memory_order_seq_cst);
/**
* Set bit idx to false, using the given memory order. Returns the
* previous value of the bit.
*
* Note that the operation is a read-modify-write operation due to the use
* of fetch_and.
*/
bool reset(size_t idx, std::memory_order order = std::memory_order_seq_cst);
/**
* Set bit idx to the given value, using the given memory order. Returns
* the previous value of the bit.
*
* Note that the operation is a read-modify-write operation due to the use
* of fetch_and or fetch_or.
*
* Yes, this is an overload of set(), to keep as close to std::bitset's
* interface as possible.
*/
bool set(
size_t idx,
bool value,
std::memory_order order = std::memory_order_seq_cst);
/**
* Read bit idx.
*/
bool test(size_t idx, std::memory_order order = std::memory_order_seq_cst)
const;
/**
* Same as test() with the default memory order.
*/
bool operator[](size_t idx) const;
/**
* Return the size of the bitset.
*/
constexpr size_t size() const {
return N;
}
private:
// Pick the largest lock-free type available
#if (ATOMIC_LLONG_LOCK_FREE == 2)
typedef unsigned long long BlockType;
#elif (ATOMIC_LONG_LOCK_FREE == 2)
typedef unsigned long BlockType;
#else
// Even if not lock free, what can we do?
typedef unsigned int BlockType;
#endif
typedef std::atomic<BlockType> AtomicBlockType;
static constexpr size_t kBitsPerBlock =
std::numeric_limits<BlockType>::digits;
static constexpr size_t blockIndex(size_t bit) {
return bit / kBitsPerBlock;
}
static constexpr size_t bitOffset(size_t bit) {
return bit % kBitsPerBlock;
}
// avoid casts
static constexpr BlockType kOne = 1;
std::array<AtomicBlockType, N> data_;
};
// value-initialize to zero
template <size_t N>
inline AtomicBitSet<N>::AtomicBitSet() : data_() {}
template <size_t N>
inline bool AtomicBitSet<N>::set(size_t idx, std::memory_order order) {
assert(idx < N * kBitsPerBlock);
BlockType mask = kOne << bitOffset(idx);
return data_[blockIndex(idx)].fetch_or(mask, order) & mask;
}
template <size_t N>
inline bool AtomicBitSet<N>::reset(size_t idx, std::memory_order order) {
assert(idx < N * kBitsPerBlock);
BlockType mask = kOne << bitOffset(idx);
return data_[blockIndex(idx)].fetch_and(~mask, order) & mask;
}
template <size_t N>
inline bool
AtomicBitSet<N>::set(size_t idx, bool value, std::memory_order order) {
return value ? set(idx, order) : reset(idx, order);
}
template <size_t N>
inline bool AtomicBitSet<N>::test(size_t idx, std::memory_order order) const {
assert(idx < N * kBitsPerBlock);
BlockType mask = kOne << bitOffset(idx);
return data_[blockIndex(idx)].load(order) & mask;
}
template <size_t N>
inline bool AtomicBitSet<N>::operator[](size_t idx) const {
return test(idx);
}
} // namespace folly
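The `AtomicBitSet` above maps a bit index to a block and an offset within that block, then sets bits via a `fetch_or` mask. The indexing arithmetic can be sketched in Python with a plain list standing in for the atomic block array (no atomicity here — this hypothetical sketch illustrates only the `blockIndex()`/`bitOffset()` math and `set()`'s return value):

```python
BITS_PER_BLOCK = 64  # assumed block width (kBitsPerBlock for a 64-bit BlockType)

def block_index(bit):
    return bit // BITS_PER_BLOCK  # which block holds this bit

def bit_offset(bit):
    return bit % BITS_PER_BLOCK   # position within that block

def set_bit(blocks, idx):
    # Mirror fetch_or: OR in the mask, return the bit's previous value.
    mask = 1 << bit_offset(idx)
    prev = blocks[block_index(idx)] & mask
    blocks[block_index(idx)] |= mask
    return bool(prev)

blocks = [0, 0]            # like AtomicBitSet<2> with 64-bit blocks
print(set_bit(blocks, 70))  # False (bit 70 = block 1, offset 6, was clear)
print(set_bit(blocks, 70))  # True  (now set)
```

Returning the previous value from the read-modify-write is what lets callers of the real `set()`/`reset()` detect whether they were the ones who flipped the bit.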

@ -1,543 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef FOLLY_ATOMICHASHARRAY_H_
#error "This should only be included by AtomicHashArray.h"
#endif
#include <type_traits>
#include <folly/detail/AtomicHashUtils.h>
#include <folly/lang/Bits.h>
namespace folly {
// AtomicHashArray private constructor --
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::
AtomicHashArray(
size_t capacity,
KeyT emptyKey,
KeyT lockedKey,
KeyT erasedKey,
double _maxLoadFactor,
uint32_t cacheSize)
: capacity_(capacity),
maxEntries_(size_t(_maxLoadFactor * capacity_ + 0.5)),
kEmptyKey_(emptyKey),
kLockedKey_(lockedKey),
kErasedKey_(erasedKey),
kAnchorMask_(nextPowTwo(capacity_) - 1),
numEntries_(0, cacheSize),
numPendingEntries_(0, cacheSize),
isFull_(0),
numErases_(0) {}
/*
* findInternal --
*
* Sets ret.second to value found and ret.index to index
* of key and returns true, or if key does not exist returns false and
* ret.index is set to capacity_.
*/
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
template <class LookupKeyT, class LookupHashFcn, class LookupEqualFcn>
typename AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::SimpleRetT
AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::findInternal(const LookupKeyT key_in) {
checkLegalKeyIfKey<LookupKeyT>(key_in);
for (size_t idx = keyToAnchorIdx<LookupKeyT, LookupHashFcn>(key_in),
numProbes = 0;
;
idx = ProbeFcn()(idx, numProbes, capacity_)) {
const KeyT key = acquireLoadKey(cells_[idx]);
if (LIKELY(LookupEqualFcn()(key, key_in))) {
return SimpleRetT(idx, true);
}
if (UNLIKELY(key == kEmptyKey_)) {
// if we hit an empty element, this key does not exist
return SimpleRetT(capacity_, false);
}
// NOTE: the way we count numProbes must be same in find(), insert(),
// and erase(). Otherwise it may break probing.
++numProbes;
if (UNLIKELY(numProbes >= capacity_)) {
// probed every cell...fail
return SimpleRetT(capacity_, false);
}
}
}
/*
* insertInternal --
*
* Returns false on failure due to key collision or full.
* Also sets ret.index to the index of the key. If the map is full, sets
* ret.index = capacity_. Also sets ret.second to cell value, thus if insert
* successful this will be what we just inserted, if there is a key collision
* this will be the previously inserted value, and if the map is full it is
* default.
*/
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
template <
typename LookupKeyT,
typename LookupHashFcn,
typename LookupEqualFcn,
typename LookupKeyToKeyFcn,
typename... ArgTs>
typename AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::SimpleRetT
AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::insertInternal(LookupKeyT key_in, ArgTs&&... vCtorArgs) {
const short NO_NEW_INSERTS = 1;
const short NO_PENDING_INSERTS = 2;
checkLegalKeyIfKey<LookupKeyT>(key_in);
size_t idx = keyToAnchorIdx<LookupKeyT, LookupHashFcn>(key_in);
size_t numProbes = 0;
for (;;) {
DCHECK_LT(idx, capacity_);
value_type* cell = &cells_[idx];
if (relaxedLoadKey(*cell) == kEmptyKey_) {
// NOTE: isFull_ is set based on numEntries_.readFast(), so it's
// possible to insert more than maxEntries_ entries. However, it's not
// possible to insert past capacity_.
++numPendingEntries_;
if (isFull_.load(std::memory_order_acquire)) {
--numPendingEntries_;
// Before deciding whether this insert succeeded, this thread needs to
// wait until no other thread can add a new entry.
// Correctness assumes isFull_ is true at this point. If
// another thread now does ++numPendingEntries_, we expect it
// to pass the isFull_.load() test above. (It shouldn't insert
// a new entry.)
detail::atomic_hash_spin_wait([&] {
return (isFull_.load(std::memory_order_acquire) !=
NO_PENDING_INSERTS) &&
(numPendingEntries_.readFull() != 0);
});
isFull_.store(NO_PENDING_INSERTS, std::memory_order_release);
if (relaxedLoadKey(*cell) == kEmptyKey_) {
// Don't insert past max load factor
return SimpleRetT(capacity_, false);
}
} else {
// An unallocated cell. Try once to lock it. If we succeed, insert here.
// If we fail, fall through to comparison below; maybe the insert that
// just beat us was for this very key....
if (tryLockCell(cell)) {
KeyT key_new;
// Write the value - done before unlocking
try {
key_new = LookupKeyToKeyFcn()(key_in);
typedef
typename std::remove_const<LookupKeyT>::type LookupKeyTNoConst;
constexpr bool kAlreadyChecked =
std::is_same<KeyT, LookupKeyTNoConst>::value;
if (!kAlreadyChecked) {
checkLegalKeyIfKey(key_new);
}
DCHECK(relaxedLoadKey(*cell) == kLockedKey_);
// A const mapped_type is only constant once constructed, so cast
// away any const for the placement new here.
using mapped = typename std::remove_const<mapped_type>::type;
new (const_cast<mapped*>(&cell->second))
ValueT(std::forward<ArgTs>(vCtorArgs)...);
unlockCell(cell, key_new); // Sets the new key
} catch (...) {
// Transition back to empty key---requires handling
// locked->empty below.
unlockCell(cell, kEmptyKey_);
--numPendingEntries_;
throw;
}
// An erase() can race here and delete right after our insertion
// Direct comparison rather than EqualFcn ok here
// (we just inserted it)
DCHECK(
relaxedLoadKey(*cell) == key_new ||
relaxedLoadKey(*cell) == kErasedKey_);
--numPendingEntries_;
++numEntries_; // This is a thread cached atomic increment :)
if (numEntries_.readFast() >= maxEntries_) {
isFull_.store(NO_NEW_INSERTS, std::memory_order_relaxed);
}
return SimpleRetT(idx, true);
}
--numPendingEntries_;
}
}
DCHECK(relaxedLoadKey(*cell) != kEmptyKey_);
if (kLockedKey_ == acquireLoadKey(*cell)) {
detail::atomic_hash_spin_wait(
[&] { return kLockedKey_ == acquireLoadKey(*cell); });
}
const KeyT thisKey = acquireLoadKey(*cell);
if (LookupEqualFcn()(thisKey, key_in)) {
// Found an existing entry for our key, but we don't overwrite the
// previous value.
return SimpleRetT(idx, false);
} else if (thisKey == kEmptyKey_ || thisKey == kLockedKey_) {
// We need to try again (i.e., don't increment numProbes or
// advance idx): this case can happen if the constructor for
// ValueT threw for this very cell (the rethrow block above).
continue;
}
// NOTE: the way we count numProbes must be same in find(),
// insert(), and erase(). Otherwise it may break probing.
++numProbes;
if (UNLIKELY(numProbes >= capacity_)) {
// probed every cell...fail
return SimpleRetT(capacity_, false);
}
idx = ProbeFcn()(idx, numProbes, capacity_);
}
}
/*
* erase --
*
* This will attempt to erase the given key key_in if the key is found. It
* returns 1 iff the key was located and marked as erased, and 0 otherwise.
*
* Memory is not freed or reclaimed by erase, i.e. the cell containing the
* erased key will never be reused. If there's an associated value, we won't
* touch it either.
*/
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
size_t AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::erase(KeyT key_in) {
CHECK_NE(key_in, kEmptyKey_);
CHECK_NE(key_in, kLockedKey_);
CHECK_NE(key_in, kErasedKey_);
for (size_t idx = keyToAnchorIdx(key_in), numProbes = 0;;
idx = ProbeFcn()(idx, numProbes, capacity_)) {
DCHECK_LT(idx, capacity_);
value_type* cell = &cells_[idx];
KeyT currentKey = acquireLoadKey(*cell);
if (currentKey == kEmptyKey_ || currentKey == kLockedKey_) {
// If we hit an empty (or locked) element, this key does not exist. This
// is similar to how it's handled in find().
return 0;
}
if (EqualFcn()(currentKey, key_in)) {
// Found an existing entry for our key, attempt to mark it erased.
// Some other thread may have erased our key, but this is ok.
KeyT expect = currentKey;
if (cellKeyPtr(*cell)->compare_exchange_strong(expect, kErasedKey_)) {
numErases_.fetch_add(1, std::memory_order_relaxed);
// Even if there's a value in the cell, we won't delete (or even
// default construct) it because some other thread may be accessing it.
// Locking it meanwhile won't work either since another thread may be
// holding a pointer to it.
// We found the key and successfully erased it.
return 1;
}
// If another thread succeeds in erasing our key, we'll stop our search.
return 0;
}
// NOTE: the way we count numProbes must be same in find(), insert(),
// and erase(). Otherwise it may break probing.
++numProbes;
if (UNLIKELY(numProbes >= capacity_)) {
// probed every cell...fail
return 0;
}
}
}
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
typename AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::SmartPtr
AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::create(size_t maxSize, const Config& c) {
CHECK_LE(c.maxLoadFactor, 1.0);
CHECK_GT(c.maxLoadFactor, 0.0);
CHECK_NE(c.emptyKey, c.lockedKey);
size_t capacity = size_t(maxSize / c.maxLoadFactor);
size_t sz = sizeof(AtomicHashArray) + sizeof(value_type) * capacity;
auto const mem = Allocator().allocate(sz);
try {
new (mem) AtomicHashArray(
capacity,
c.emptyKey,
c.lockedKey,
c.erasedKey,
c.maxLoadFactor,
c.entryCountThreadCacheSize);
} catch (...) {
Allocator().deallocate(mem, sz);
throw;
}
SmartPtr map(static_cast<AtomicHashArray*>((void*)mem));
/*
* Mark all cells as empty.
*
* Note: we're bending the rules a little here accessing the key
* element in our cells even though the cell object has not been
* constructed, and casting them to atomic objects (see cellKeyPtr).
* (Also, in fact we never actually invoke the value_type
* constructor.) This is in order to avoid needing to default
* construct a bunch of value_type when we first start up: if you
* have an expensive default constructor for the value type this can
* noticeably speed construction time for an AHA.
*/
FOR_EACH_RANGE (i, 0, map->capacity_) {
cellKeyPtr(map->cells_[i])
->store(map->kEmptyKey_, std::memory_order_relaxed);
}
return map;
}
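The sizing arithmetic in create() above (over-allocating the cell array so that maxSize entries fill the table to at most maxLoadFactor) can be sketched standalone. This helper is illustrative only, not part of folly:

```cpp
#include <cstddef>

// Illustrative sketch of the capacity computation in create() above: the
// cell array is over-allocated so that maxSize entries fill the table to
// at most maxLoadFactor.
size_t computeCapacity(size_t maxSize, double maxLoadFactor) {
  return static_cast<size_t>(maxSize / maxLoadFactor);
}
```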
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
void AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::destroy(AtomicHashArray* p) {
assert(p);
size_t sz = sizeof(AtomicHashArray) + sizeof(value_type) * p->capacity_;
FOR_EACH_RANGE (i, 0, p->capacity_) {
if (p->cells_[i].first != p->kEmptyKey_) {
p->cells_[i].~value_type();
}
}
p->~AtomicHashArray();
Allocator().deallocate((char*)p, sz);
}
// clear -- clears all keys and values in the map and resets all counters
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
void AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::clear() {
FOR_EACH_RANGE (i, 0, capacity_) {
if (cells_[i].first != kEmptyKey_) {
cells_[i].~value_type();
*const_cast<KeyT*>(&cells_[i].first) = kEmptyKey_;
}
CHECK(cells_[i].first == kEmptyKey_);
}
numEntries_.set(0);
numPendingEntries_.set(0);
isFull_.store(0, std::memory_order_relaxed);
numErases_.store(0, std::memory_order_relaxed);
}
// Iterator implementation
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
template <class ContT, class IterVal>
struct AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::aha_iterator
: boost::iterator_facade<
aha_iterator<ContT, IterVal>,
IterVal,
boost::forward_traversal_tag> {
explicit aha_iterator() : aha_(nullptr) {}
// Conversion ctor for interoperability between const_iterator and
// iterator. The enable_if<> magic keeps us well-behaved for
// is_convertible<> (v. the iterator_facade documentation).
template <class OtherContT, class OtherVal>
aha_iterator(
const aha_iterator<OtherContT, OtherVal>& o,
typename std::enable_if<
std::is_convertible<OtherVal*, IterVal*>::value>::type* = nullptr)
: aha_(o.aha_), offset_(o.offset_) {}
explicit aha_iterator(ContT* array, size_t offset)
: aha_(array), offset_(offset) {}
// Returns unique index that can be used with findAt().
// WARNING: The following function will fail silently for hashtable
// with capacity > 2^32
uint32_t getIndex() const {
return offset_;
}
void advancePastEmpty() {
while (offset_ < aha_->capacity_ && !isValid()) {
++offset_;
}
}
private:
friend class AtomicHashArray;
friend class boost::iterator_core_access;
void increment() {
++offset_;
advancePastEmpty();
}
bool equal(const aha_iterator& o) const {
return aha_ == o.aha_ && offset_ == o.offset_;
}
IterVal& dereference() const {
return aha_->cells_[offset_];
}
bool isValid() const {
KeyT key = acquireLoadKey(aha_->cells_[offset_]);
return key != aha_->kEmptyKey_ && key != aha_->kLockedKey_ &&
key != aha_->kErasedKey_;
}
private:
ContT* aha_;
size_t offset_;
}; // aha_iterator
} // namespace folly


@@ -1,448 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* AtomicHashArray is the building block for AtomicHashMap. It provides the
* core lock-free functionality, but is limited by the fact that it cannot
* grow past its initialization size and is a little more awkward (no public
* constructor, for example). If you're confident that you won't run out of
 * space, don't mind the awkwardness, and really need bare-metal performance,
* feel free to use AHA directly.
*
* Check out AtomicHashMap.h for more thorough documentation on perf and
* general pros and cons relative to other hash maps.
*
* @author Spencer Ahrens <sahrens@fb.com>
* @author Jordan DeLong <delong.j@fb.com>
*/
#pragma once
#define FOLLY_ATOMICHASHARRAY_H_
#include <atomic>
#include <boost/iterator/iterator_facade.hpp>
#include <boost/noncopyable.hpp>
#include <folly/ThreadCachedInt.h>
#include <folly/Utility.h>
#include <folly/hash/Hash.h>
namespace folly {
struct AtomicHashArrayLinearProbeFcn {
inline size_t operator()(size_t idx, size_t /* numProbes */, size_t capacity)
const {
idx += 1; // linear probing
// Avoid modulus because it's slow
return LIKELY(idx < capacity) ? idx : (idx - capacity);
}
};
struct AtomicHashArrayQuadraticProbeFcn {
inline size_t operator()(size_t idx, size_t numProbes, size_t capacity)
const {
idx += numProbes; // quadratic probing
// Avoid modulus because it's slow
return LIKELY(idx < capacity) ? idx : (idx - capacity);
}
};
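The probe functors above avoid a modulus by relying on a bounded step. A standalone sketch of the trick: for linear probing idx advances by one per probe, and for quadratic probing by numProbes (which stays below capacity), so the new index is always below 2 * capacity and a single conditional subtraction suffices to wrap:

```cpp
#include <cstddef>

// Standalone sketch of the wrap-without-modulus trick used by the probe
// functors above. Because idx < capacity and stride <= capacity, the sum
// is below 2 * capacity, so one conditional subtraction replaces idx % capacity.
size_t probeStep(size_t idx, size_t stride, size_t capacity) {
  idx += stride;
  return idx < capacity ? idx : idx - capacity;
}
```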
// Enables specializing checkLegalKey without specializing its class.
namespace detail {
template <typename NotKeyT, typename KeyT>
inline void checkLegalKeyIfKeyTImpl(
NotKeyT /* ignored */,
KeyT /* emptyKey */,
KeyT /* lockedKey */,
KeyT /* erasedKey */) {}
template <typename KeyT>
inline void checkLegalKeyIfKeyTImpl(
KeyT key_in,
KeyT emptyKey,
KeyT lockedKey,
KeyT erasedKey) {
DCHECK_NE(key_in, emptyKey);
DCHECK_NE(key_in, lockedKey);
DCHECK_NE(key_in, erasedKey);
}
} // namespace detail
template <
class KeyT,
class ValueT,
class HashFcn = std::hash<KeyT>,
class EqualFcn = std::equal_to<KeyT>,
class Allocator = std::allocator<char>,
class ProbeFcn = AtomicHashArrayLinearProbeFcn,
class KeyConvertFcn = Identity>
class AtomicHashMap;
template <
class KeyT,
class ValueT,
class HashFcn = std::hash<KeyT>,
class EqualFcn = std::equal_to<KeyT>,
class Allocator = std::allocator<char>,
class ProbeFcn = AtomicHashArrayLinearProbeFcn,
class KeyConvertFcn = Identity>
class AtomicHashArray : boost::noncopyable {
static_assert(
(std::is_convertible<KeyT, int32_t>::value ||
std::is_convertible<KeyT, int64_t>::value ||
std::is_convertible<KeyT, const void*>::value),
"You are trying to use AtomicHashArray with disallowed key "
"types. You must use atomically compare-and-swappable integer "
"keys, or a different container class.");
public:
typedef KeyT key_type;
typedef ValueT mapped_type;
typedef HashFcn hasher;
typedef EqualFcn key_equal;
typedef KeyConvertFcn key_convert;
typedef std::pair<const KeyT, ValueT> value_type;
typedef std::size_t size_type;
typedef std::ptrdiff_t difference_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef value_type* pointer;
typedef const value_type* const_pointer;
const size_t capacity_;
const size_t maxEntries_;
const KeyT kEmptyKey_;
const KeyT kLockedKey_;
const KeyT kErasedKey_;
template <class ContT, class IterVal>
struct aha_iterator;
typedef aha_iterator<const AtomicHashArray, const value_type> const_iterator;
typedef aha_iterator<AtomicHashArray, value_type> iterator;
// You really shouldn't need this if you use the SmartPtr provided by create,
// but if you really want to do something crazy like stick the released
// pointer into a DiscriminatedPtr or something, you'll need this to clean up
// after yourself.
static void destroy(AtomicHashArray*);
private:
const size_t kAnchorMask_;
struct Deleter {
void operator()(AtomicHashArray* ptr) {
AtomicHashArray::destroy(ptr);
}
};
public:
typedef std::unique_ptr<AtomicHashArray, Deleter> SmartPtr;
/*
* create --
*
* Creates AtomicHashArray objects. Use instead of constructor/destructor.
*
* We do things this way in order to avoid the perf penalty of a second
* pointer indirection when composing these into AtomicHashMap, which needs
* to store an array of pointers so that it can perform atomic operations on
* them when growing.
*
* Instead of a mess of arguments, we take a max size and a Config struct to
* simulate named ctor parameters. The Config struct has sensible defaults
* for everything, but is overloaded - if you specify a positive capacity,
* that will be used directly instead of computing it based on
* maxLoadFactor.
*
* Create returns an AHA::SmartPtr which is a unique_ptr with a custom
* deleter to make sure everything is cleaned up properly.
*/
struct Config {
KeyT emptyKey;
KeyT lockedKey;
KeyT erasedKey;
double maxLoadFactor;
double growthFactor;
uint32_t entryCountThreadCacheSize;
size_t capacity; // if positive, overrides maxLoadFactor
// Cannot have constexpr ctor because some compilers rightly complain.
Config()
: emptyKey((KeyT)-1),
lockedKey((KeyT)-2),
erasedKey((KeyT)-3),
maxLoadFactor(0.8),
growthFactor(-1),
entryCountThreadCacheSize(1000),
capacity(0) {}
};
// Cannot have pre-instantiated const Config instance because of SIOF.
static SmartPtr create(size_t maxSize, const Config& c = Config());
/*
* find --
 *
* Returns the iterator to the element if found, otherwise end().
*
* As an optional feature, the type of the key to look up (LookupKeyT) is
* allowed to be different from the type of keys actually stored (KeyT).
*
* This enables use cases where materializing the key is costly and usually
 * redundant, e.g., canonicalizing/interning a set of strings and being able
* to look up by StringPiece. To use this feature, LookupHashFcn must take
* a LookupKeyT, and LookupEqualFcn must take KeyT and LookupKeyT as first
* and second parameter, respectively.
*
 * See folly/test/AtomicHashArrayTest.cpp for sample usage.
*/
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal>
iterator find(LookupKeyT k) {
return iterator(
this, findInternal<LookupKeyT, LookupHashFcn, LookupEqualFcn>(k).idx);
}
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal>
const_iterator find(LookupKeyT k) const {
return const_cast<AtomicHashArray*>(this)
->find<LookupKeyT, LookupHashFcn, LookupEqualFcn>(k);
}
/*
* insert --
*
* Returns a pair with iterator to the element at r.first and bool success.
* Retrieve the index with ret.first.getIndex().
*
* Fails on key collision (does not overwrite) or if map becomes
* full, at which point no element is inserted, iterator is set to end(),
* and success is set false. On collisions, success is set false, but the
* iterator is set to the existing entry.
*/
std::pair<iterator, bool> insert(const value_type& r) {
return emplace(r.first, r.second);
}
std::pair<iterator, bool> insert(value_type&& r) {
return emplace(r.first, std::move(r.second));
}
/*
* emplace --
*
* Same contract as insert(), but performs in-place construction
* of the value type using the specified arguments.
*
* Also, like find(), this method optionally allows 'key_in' to have a type
* different from that stored in the table; see find(). If and only if no
* equal key is already present, this method converts 'key_in' to a key of
* type KeyT using the provided LookupKeyToKeyFcn.
*/
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal,
typename LookupKeyToKeyFcn = key_convert,
typename... ArgTs>
std::pair<iterator, bool> emplace(LookupKeyT key_in, ArgTs&&... vCtorArgs) {
SimpleRetT ret = insertInternal<
LookupKeyT,
LookupHashFcn,
LookupEqualFcn,
LookupKeyToKeyFcn>(key_in, std::forward<ArgTs>(vCtorArgs)...);
return std::make_pair(iterator(this, ret.idx), ret.success);
}
// returns the number of elements erased - should never exceed 1
size_t erase(KeyT k);
// clears all keys and values in the map and resets all counters. Not thread
// safe.
void clear();
// Exact number of elements in the map - note that readFull() acquires a
// mutex. See folly/ThreadCachedInt.h for more details.
size_t size() const {
return numEntries_.readFull() - numErases_.load(std::memory_order_relaxed);
}
bool empty() const {
return size() == 0;
}
iterator begin() {
iterator it(this, 0);
it.advancePastEmpty();
return it;
}
const_iterator begin() const {
const_iterator it(this, 0);
it.advancePastEmpty();
return it;
}
iterator end() {
return iterator(this, capacity_);
}
const_iterator end() const {
return const_iterator(this, capacity_);
}
// See AtomicHashMap::findAt - access elements directly
// WARNING: The following 2 functions will fail silently for hashtable
// with capacity > 2^32
iterator findAt(uint32_t idx) {
DCHECK_LT(idx, capacity_);
return iterator(this, idx);
}
const_iterator findAt(uint32_t idx) const {
return const_cast<AtomicHashArray*>(this)->findAt(idx);
}
iterator makeIter(size_t idx) {
return iterator(this, idx);
}
const_iterator makeIter(size_t idx) const {
return const_iterator(this, idx);
}
// The max load factor allowed for this map
double maxLoadFactor() const {
return ((double)maxEntries_) / capacity_;
}
void setEntryCountThreadCacheSize(uint32_t newSize) {
numEntries_.setCacheSize(newSize);
numPendingEntries_.setCacheSize(newSize);
}
uint32_t getEntryCountThreadCacheSize() const {
return numEntries_.getCacheSize();
}
/* Private data and helper functions... */
private:
friend class AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn>;
struct SimpleRetT {
size_t idx;
bool success;
SimpleRetT(size_t i, bool s) : idx(i), success(s) {}
SimpleRetT() = default;
};
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal,
typename LookupKeyToKeyFcn = Identity,
typename... ArgTs>
SimpleRetT insertInternal(LookupKeyT key, ArgTs&&... vCtorArgs);
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal>
SimpleRetT findInternal(const LookupKeyT key);
template <typename MaybeKeyT>
void checkLegalKeyIfKey(MaybeKeyT key) {
detail::checkLegalKeyIfKeyTImpl(key, kEmptyKey_, kLockedKey_, kErasedKey_);
}
static std::atomic<KeyT>* cellKeyPtr(const value_type& r) {
// We need some illegal casting here in order to actually store
// our value_type as a std::pair<const,>. But a little bit of
// undefined behavior never hurt anyone ...
static_assert(
sizeof(std::atomic<KeyT>) == sizeof(KeyT),
"std::atomic is implemented in an unexpected way for AHM");
return const_cast<std::atomic<KeyT>*>(
reinterpret_cast<std::atomic<KeyT> const*>(&r.first));
}
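The cast performed by cellKeyPtr above can be shown in isolation. This sketch uses a fixed int64_t key for concreteness; as the comment above admits, reinterpreting the pair's const key as a std::atomic is technically undefined behavior, and the static_assert checks the one layout property the trick depends on:

```cpp
#include <atomic>
#include <cstdint>
#include <utility>

// Standalone sketch of cellKeyPtr above: the const key inside a
// std::pair<const KeyT, ValueT> is reinterpreted as std::atomic<KeyT> so it
// can be CASed/stored in place. Technically UB, guarded by a layout check.
using Key = int64_t;
using Cell = std::pair<const Key, int>;

std::atomic<Key>* keyPtr(const Cell& r) {
  static_assert(sizeof(std::atomic<Key>) == sizeof(Key), "unexpected layout");
  return const_cast<std::atomic<Key>*>(
      reinterpret_cast<const std::atomic<Key>*>(&r.first));
}
```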
static KeyT relaxedLoadKey(const value_type& r) {
return cellKeyPtr(r)->load(std::memory_order_relaxed);
}
static KeyT acquireLoadKey(const value_type& r) {
return cellKeyPtr(r)->load(std::memory_order_acquire);
}
// Fun with thread local storage - atomic increment is expensive
// (relatively), so we accumulate in the thread cache and periodically
// flush to the actual variable, and walk through the unflushed counts when
// reading the value, so be careful of calling size() too frequently. This
// increases insertion throughput several times over while keeping the count
// accurate.
ThreadCachedInt<uint64_t> numEntries_; // Successful key inserts
ThreadCachedInt<uint64_t> numPendingEntries_; // Used by insertInternal
std::atomic<int64_t> isFull_; // Used by insertInternal
std::atomic<int64_t> numErases_; // Successful key erases
value_type cells_[0]; // This must be the last field of this class
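The thread-cached counting scheme described in the comment above can be sketched with a plain thread_local cache. This is a minimal illustration of the idea, not folly's ThreadCachedInt:

```cpp
#include <atomic>
#include <cstdint>

// Minimal illustration of the thread-cached counter idea described above:
// increments accumulate in a thread-local cache and are flushed to the
// shared atomic only every kFlushEvery ticks, so the fast read may lag
// behind the true total by the unflushed per-thread amounts.
class CachedCounter {
 public:
  void increment() {
    thread_local uint64_t cache = 0;
    if (++cache >= kFlushEvery) {
      total_.fetch_add(cache, std::memory_order_relaxed);
      cache = 0;
    }
  }
  uint64_t readFast() const {  // cheap, but ignores unflushed caches
    return total_.load(std::memory_order_relaxed);
  }

 private:
  static constexpr uint64_t kFlushEvery = 100;
  std::atomic<uint64_t> total_{0};
};
```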
// Force constructor/destructor private since create/destroy should be
// used externally instead
AtomicHashArray(
size_t capacity,
KeyT emptyKey,
KeyT lockedKey,
KeyT erasedKey,
double maxLoadFactor,
uint32_t cacheSize);
~AtomicHashArray() = default;
inline void unlockCell(value_type* const cell, KeyT newKey) {
cellKeyPtr(*cell)->store(newKey, std::memory_order_release);
}
inline bool tryLockCell(value_type* const cell) {
KeyT expect = kEmptyKey_;
return cellKeyPtr(*cell)->compare_exchange_strong(
expect, kLockedKey_, std::memory_order_acq_rel);
}
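The per-cell locking protocol implemented by tryLockCell/unlockCell above can be shown standalone: a writer claims an empty cell by CASing the key from the empty sentinel to a reserved locked sentinel, and "unlocks" by publishing the real key with release ordering. The sentinel values here are illustrative:

```cpp
#include <atomic>
#include <cstdint>

// Standalone sketch of the tryLockCell/unlockCell protocol above.
constexpr int64_t kEmpty = -1;   // illustrative sentinel values
constexpr int64_t kLocked = -2;

bool tryLock(std::atomic<int64_t>& key) {
  int64_t expect = kEmpty;
  // Only an empty cell can be claimed; a second claim fails.
  return key.compare_exchange_strong(
      expect, kLocked, std::memory_order_acq_rel);
}

void unlock(std::atomic<int64_t>& key, int64_t newKey) {
  key.store(newKey, std::memory_order_release);  // publish the entry
}
```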
template <class LookupKeyT = key_type, class LookupHashFcn = hasher>
inline size_t keyToAnchorIdx(const LookupKeyT k) const {
const size_t hashVal = LookupHashFcn()(k);
const size_t probe = hashVal & kAnchorMask_;
return LIKELY(probe < capacity_) ? probe : hashVal % capacity_;
}
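keyToAnchorIdx above masks the hash instead of taking a modulus. A standalone sketch, under the assumption (the constructor is not shown in this header) that kAnchorMask_ is one less than the smallest power of two at or above capacity_, so the masked probe usually lands inside the table and the slow modulus path is rare:

```cpp
#include <cstddef>

// Sketch of keyToAnchorIdx above. Assumption: the anchor mask is
// nextPowTwo(capacity) - 1, as the (unshown) constructor arranges.
size_t anchorMask(size_t capacity) {
  size_t m = 1;
  while (m < capacity) m <<= 1;  // smallest power of two >= capacity
  return m - 1;
}

size_t anchorIdx(size_t hashVal, size_t capacity) {
  const size_t probe = hashVal & anchorMask(capacity);
  return probe < capacity ? probe : hashVal % capacity;
}
```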
}; // AtomicHashArray
} // namespace folly
#include <folly/AtomicHashArray-inl.h>


@@ -1,653 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef FOLLY_ATOMICHASHMAP_H_
#error "This should only be included by AtomicHashMap.h"
#endif
#include <folly/detail/AtomicHashUtils.h>
namespace folly {
// AtomicHashMap constructor -- Atomic wrapper that allows growth
// This class has a lot of overhead (184 bytes), so only use it for big maps
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::AtomicHashMap(size_t finalSizeEst, const Config& config)
: kGrowthFrac_(
config.growthFactor < 0 ? 1.0f - config.maxLoadFactor
: config.growthFactor) {
CHECK(config.maxLoadFactor > 0.0f && config.maxLoadFactor < 1.0f);
subMaps_[0].store(
SubMap::create(finalSizeEst, config).release(),
std::memory_order_relaxed);
auto subMapCount = kNumSubMaps_;
FOR_EACH_RANGE (i, 1, subMapCount) {
subMaps_[i].store(nullptr, std::memory_order_relaxed);
}
numMapsAllocated_.store(1, std::memory_order_relaxed);
}
// emplace --
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
template <
typename LookupKeyT,
typename LookupHashFcn,
typename LookupEqualFcn,
typename LookupKeyToKeyFcn,
typename... ArgTs>
std::pair<
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::iterator,
bool>
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::emplace(LookupKeyT k, ArgTs&&... vCtorArgs) {
SimpleRetT ret = insertInternal<
LookupKeyT,
LookupHashFcn,
LookupEqualFcn,
LookupKeyToKeyFcn>(k, std::forward<ArgTs>(vCtorArgs)...);
SubMap* subMap = subMaps_[ret.i].load(std::memory_order_relaxed);
return std::make_pair(
iterator(this, ret.i, subMap->makeIter(ret.j)), ret.success);
}
// insertInternal -- Allocates new sub maps as existing ones fill up.
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
template <
typename LookupKeyT,
typename LookupHashFcn,
typename LookupEqualFcn,
typename LookupKeyToKeyFcn,
typename... ArgTs>
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::SimpleRetT
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::insertInternal(LookupKeyT key, ArgTs&&... vCtorArgs) {
beginInsertInternal:
auto nextMapIdx = // this maintains our state
numMapsAllocated_.load(std::memory_order_acquire);
typename SubMap::SimpleRetT ret;
FOR_EACH_RANGE (i, 0, nextMapIdx) {
// insert in each map successively. If one succeeds, we're done!
SubMap* subMap = subMaps_[i].load(std::memory_order_relaxed);
ret = subMap->template insertInternal<
LookupKeyT,
LookupHashFcn,
LookupEqualFcn,
LookupKeyToKeyFcn>(key, std::forward<ArgTs>(vCtorArgs)...);
if (ret.idx == subMap->capacity_) {
continue; // map is full, so try the next one
}
// Either collision or success - insert in either case
return SimpleRetT(i, ret.idx, ret.success);
}
// If we made it this far, all maps are full and we need to try to allocate
// the next one.
SubMap* primarySubMap = subMaps_[0].load(std::memory_order_relaxed);
if (nextMapIdx >= kNumSubMaps_ ||
primarySubMap->capacity_ * kGrowthFrac_ < 1.0) {
// Can't allocate any more sub maps.
throw AtomicHashMapFullError();
}
if (tryLockMap(nextMapIdx)) {
// Alloc a new map and shove it in. We can change whatever
// we want because other threads are waiting on us...
size_t numCellsAllocated = (size_t)(
primarySubMap->capacity_ *
std::pow(1.0 + kGrowthFrac_, nextMapIdx - 1));
size_t newSize = size_t(numCellsAllocated * kGrowthFrac_);
DCHECK(
subMaps_[nextMapIdx].load(std::memory_order_relaxed) ==
(SubMap*)kLockedPtr_);
// create a new map using the settings stored in the first map
Config config;
config.emptyKey = primarySubMap->kEmptyKey_;
config.lockedKey = primarySubMap->kLockedKey_;
config.erasedKey = primarySubMap->kErasedKey_;
config.maxLoadFactor = primarySubMap->maxLoadFactor();
config.entryCountThreadCacheSize =
primarySubMap->getEntryCountThreadCacheSize();
subMaps_[nextMapIdx].store(
SubMap::create(newSize, config).release(), std::memory_order_relaxed);
// Publish the map to other threads.
numMapsAllocated_.fetch_add(1, std::memory_order_release);
DCHECK_EQ(
nextMapIdx + 1, numMapsAllocated_.load(std::memory_order_relaxed));
} else {
// If we lost the race, we'll have to wait for the next map to get
// allocated before doing any insertion here.
detail::atomic_hash_spin_wait([&] {
return nextMapIdx >= numMapsAllocated_.load(std::memory_order_acquire);
});
}
// Relaxed is ok here because either we just created this map, or we
// just did a spin wait with an acquire load on numMapsAllocated_.
SubMap* loadedMap = subMaps_[nextMapIdx].load(std::memory_order_relaxed);
DCHECK(loadedMap && loadedMap != (SubMap*)kLockedPtr_);
ret = loadedMap->insertInternal(key, std::forward<ArgTs>(vCtorArgs)...);
if (ret.idx != loadedMap->capacity_) {
return SimpleRetT(nextMapIdx, ret.idx, ret.success);
}
// We took way too long and the new map is already full...try again from
// the top (this should pretty much never happen).
goto beginInsertInternal;
}
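The sub-map sizing computed in insertInternal above can be isolated: each new sub-map scales the primary capacity geometrically by (1 + growthFrac), then applies growthFrac to get the new map's entry budget. A sketch of just that arithmetic:

```cpp
#include <cmath>
#include <cstddef>

// Sketch of the sub-map sizing in insertInternal above. nextMapIdx is the
// 1-based index of the sub-map being allocated (index 0 is the primary).
size_t nextSubMapSize(size_t primaryCapacity, double growthFrac,
                      size_t nextMapIdx) {
  double cells =
      primaryCapacity * std::pow(1.0 + growthFrac, nextMapIdx - 1.0);
  return static_cast<size_t>(cells * growthFrac);
}
```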
// find --
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
template <class LookupKeyT, class LookupHashFcn, class LookupEqualFcn>
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::iterator
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::find(LookupKeyT k) {
SimpleRetT ret = findInternal<LookupKeyT, LookupHashFcn, LookupEqualFcn>(k);
if (!ret.success) {
return end();
}
SubMap* subMap = subMaps_[ret.i].load(std::memory_order_relaxed);
return iterator(this, ret.i, subMap->makeIter(ret.j));
}
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
template <class LookupKeyT, class LookupHashFcn, class LookupEqualFcn>
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::const_iterator
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::find(LookupKeyT k) const {
return const_cast<AtomicHashMap*>(this)
->find<LookupKeyT, LookupHashFcn, LookupEqualFcn>(k);
}
// findInternal --
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
template <class LookupKeyT, class LookupHashFcn, class LookupEqualFcn>
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::SimpleRetT
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::findInternal(const LookupKeyT k) const {
SubMap* const primaryMap = subMaps_[0].load(std::memory_order_relaxed);
typename SubMap::SimpleRetT ret =
primaryMap
->template findInternal<LookupKeyT, LookupHashFcn, LookupEqualFcn>(k);
if (LIKELY(ret.idx != primaryMap->capacity_)) {
return SimpleRetT(0, ret.idx, ret.success);
}
const unsigned int numMaps =
numMapsAllocated_.load(std::memory_order_acquire);
FOR_EACH_RANGE (i, 1, numMaps) {
// Check each map successively. If one succeeds, we're done!
SubMap* thisMap = subMaps_[i].load(std::memory_order_relaxed);
ret =
thisMap
->template findInternal<LookupKeyT, LookupHashFcn, LookupEqualFcn>(
k);
if (LIKELY(ret.idx != thisMap->capacity_)) {
return SimpleRetT(i, ret.idx, ret.success);
}
}
// Didn't find our key...
return SimpleRetT(numMaps, 0, false);
}
// findAtInternal -- see encodeIndex() for details.
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::SimpleRetT
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::findAtInternal(uint32_t idx) const {
uint32_t subMapIdx, subMapOffset;
if (idx & kSecondaryMapBit_) {
// idx falls in a secondary map
idx &= ~kSecondaryMapBit_; // unset secondary bit
subMapIdx = idx >> kSubMapIndexShift_;
DCHECK_LT(subMapIdx, numMapsAllocated_.load(std::memory_order_relaxed));
subMapOffset = idx & kSubMapIndexMask_;
} else {
// idx falls in primary map
subMapIdx = 0;
subMapOffset = idx;
}
return SimpleRetT(subMapIdx, subMapOffset, true);
}
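The bit layout decoded by findAtInternal above can be sketched standalone. The shift and mask values below are illustrative assumptions, since kSecondaryMapBit_, kSubMapIndexShift_, and kSubMapIndexMask_ are defined elsewhere in AtomicHashMap.h:

```cpp
#include <cstdint>

// Illustrative decode of the packed index used by findAtInternal above.
// Assumed layout: the top bit flags a secondary map, the next bits select
// the sub-map, and the remaining bits are the offset within that sub-map.
constexpr uint32_t kSecondaryBit = 1u << 31;
constexpr uint32_t kSubMapShift = 27;  // assumption, not folly's constant
constexpr uint32_t kSubMapMask = (1u << kSubMapShift) - 1;

struct Loc {
  uint32_t subMapIdx;
  uint32_t offset;
};

Loc decodeIndex(uint32_t idx) {
  if (idx & kSecondaryBit) {
    idx &= ~kSecondaryBit;  // unset secondary bit
    return {idx >> kSubMapShift, idx & kSubMapMask};
  }
  return {0, idx};  // primary map: the offset is the index itself
}
```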
// erase --
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
typename AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::size_type
AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::erase(const KeyT k) {
int const numMaps = numMapsAllocated_.load(std::memory_order_acquire);
FOR_EACH_RANGE (i, 0, numMaps) {
// Check each map successively. If one succeeds, we're done!
if (subMaps_[i].load(std::memory_order_relaxed)->erase(k)) {
return 1;
}
}
// Didn't find our key...
return 0;
}
// capacity -- summation of capacities of all submaps
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
size_t AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::capacity() const {
size_t totalCap(0);
int const numMaps = numMapsAllocated_.load(std::memory_order_acquire);
FOR_EACH_RANGE (i, 0, numMaps) {
totalCap += subMaps_[i].load(std::memory_order_relaxed)->capacity_;
}
return totalCap;
}
// spaceRemaining --
// number of new insertions until current submaps are all at max load
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
size_t AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::spaceRemaining() const {
size_t spaceRem(0);
int const numMaps = numMapsAllocated_.load(std::memory_order_acquire);
FOR_EACH_RANGE (i, 0, numMaps) {
SubMap* thisMap = subMaps_[i].load(std::memory_order_relaxed);
spaceRem +=
std::max(0, thisMap->maxEntries_ - thisMap->numEntries_.readFull());
}
return spaceRem;
}
// clear -- Wipes all keys and values from primary map and destroys
// all secondary maps. Not thread safe.
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
void AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::clear() {
subMaps_[0].load(std::memory_order_relaxed)->clear();
int const numMaps = numMapsAllocated_.load(std::memory_order_relaxed);
FOR_EACH_RANGE (i, 1, numMaps) {
SubMap* thisMap = subMaps_[i].load(std::memory_order_relaxed);
DCHECK(thisMap);
SubMap::destroy(thisMap);
subMaps_[i].store(nullptr, std::memory_order_relaxed);
}
numMapsAllocated_.store(1, std::memory_order_relaxed);
}
// size --
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
size_t AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::size() const {
size_t totalSize(0);
int const numMaps = numMapsAllocated_.load(std::memory_order_acquire);
FOR_EACH_RANGE (i, 0, numMaps) {
totalSize += subMaps_[i].load(std::memory_order_relaxed)->size();
}
return totalSize;
}
// encodeIndex -- Encode the submap index and offset into return.
// index_ret must be pre-populated with the submap offset.
//
// We leave index_ret untouched when referring to the primary map
// so it can be as large as possible (31 data bits). Max size of
// secondary maps is limited by what can fit in the low 27 bits.
//
// Returns the following bit-encoded data in index_ret:
// if subMap == 0 (primary map) =>
// bit(s) value
// 31 0
// 0-30 submap offset (index_ret input)
//
// if subMap > 0 (secondary maps) =>
// bit(s) value
// 31 1
// 27-30 which subMap
// 0-26 subMap offset (index_ret input)
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
inline uint32_t AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::encodeIndex(uint32_t subMap, uint32_t offset) {
DCHECK_EQ(offset & kSecondaryMapBit_, 0); // offset can't be too big
if (subMap == 0) {
return offset;
}
// Make sure subMap isn't too big
DCHECK_EQ(subMap >> kNumSubMapBits_, 0);
// Make sure subMap bits of offset are clear
DCHECK_EQ(offset & (~kSubMapIndexMask_ | kSecondaryMapBit_), 0);
// Set high-order bits to encode which submap this index belongs to
return offset | (subMap << kSubMapIndexShift_) | kSecondaryMapBit_;
}
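The encoding and its inverse can be sketched standalone. The constants below are local copies of `kNumSubMapBits_` and friends, and `decodeIndex()` is a hypothetical helper that mirrors what `findAtInternal()` does, not a function in the header:

```cpp
#include <cassert>
#include <cstdint>

// Local copies of the bit-layout constants (4 submap-index bits,
// high bit marks a secondary map, low 27 bits hold the offset).
constexpr uint32_t kNumSubMapBits = 4;
constexpr uint32_t kSecondaryMapBit = 1u << 31;                  // highest bit
constexpr uint32_t kSubMapIndexShift = 32 - kNumSubMapBits - 1;  // 27
constexpr uint32_t kSubMapIndexMask = (1u << kSubMapIndexShift) - 1;

uint32_t encodeIndex(uint32_t subMap, uint32_t offset) {
  if (subMap == 0) {
    return offset;  // primary map: offset passes through, up to 31 data bits
  }
  return offset | (subMap << kSubMapIndexShift) | kSecondaryMapBit;
}

// Hypothetical inverse, mirroring findAtInternal() above.
void decodeIndex(uint32_t idx, uint32_t& subMap, uint32_t& offset) {
  if (idx & kSecondaryMapBit) {  // secondary map: unpack submap and offset
    idx &= ~kSecondaryMapBit;
    subMap = idx >> kSubMapIndexShift;
    offset = idx & kSubMapIndexMask;
  } else {  // primary map: the index is the offset
    subMap = 0;
    offset = idx;
  }
}
```

Round-tripping an index through these two functions recovers the original submap/offset pair, which is why `findAt()` can hand out stable 32-bit references.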
// Iterator implementation
template <
typename KeyT,
typename ValueT,
typename HashFcn,
typename EqualFcn,
typename Allocator,
typename ProbeFcn,
typename KeyConvertFcn>
template <class ContT, class IterVal, class SubIt>
struct AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>::ahm_iterator
: boost::iterator_facade<
ahm_iterator<ContT, IterVal, SubIt>,
IterVal,
boost::forward_traversal_tag> {
explicit ahm_iterator() : ahm_(nullptr) {}
// Conversion ctor for interoperability between const_iterator and
// iterator. The enable_if<> magic keeps us well-behaved for
// is_convertible<> (v. the iterator_facade documentation).
template <class OtherContT, class OtherVal, class OtherSubIt>
ahm_iterator(
const ahm_iterator<OtherContT, OtherVal, OtherSubIt>& o,
typename std::enable_if<
std::is_convertible<OtherSubIt, SubIt>::value>::type* = nullptr)
: ahm_(o.ahm_), subMap_(o.subMap_), subIt_(o.subIt_) {}
/*
* Returns the unique index that can be used for access directly
* into the data storage.
*/
uint32_t getIndex() const {
CHECK(!isEnd());
return ahm_->encodeIndex(subMap_, subIt_.getIndex());
}
private:
friend class AtomicHashMap;
explicit ahm_iterator(ContT* ahm, uint32_t subMap, const SubIt& subIt)
: ahm_(ahm), subMap_(subMap), subIt_(subIt) {}
friend class boost::iterator_core_access;
void increment() {
CHECK(!isEnd());
++subIt_;
checkAdvanceToNextSubmap();
}
bool equal(const ahm_iterator& other) const {
if (ahm_ != other.ahm_) {
return false;
}
if (isEnd() || other.isEnd()) {
return isEnd() == other.isEnd();
}
return subMap_ == other.subMap_ && subIt_ == other.subIt_;
}
IterVal& dereference() const {
return *subIt_;
}
bool isEnd() const {
return ahm_ == nullptr;
}
void checkAdvanceToNextSubmap() {
if (isEnd()) {
return;
}
SubMap* thisMap = ahm_->subMaps_[subMap_].load(std::memory_order_relaxed);
while (subIt_ == thisMap->end()) {
// This sub iterator is done, advance to next one
if (subMap_ + 1 <
ahm_->numMapsAllocated_.load(std::memory_order_acquire)) {
++subMap_;
thisMap = ahm_->subMaps_[subMap_].load(std::memory_order_relaxed);
subIt_ = thisMap->begin();
} else {
ahm_ = nullptr;
return;
}
}
}
private:
ContT* ahm_;
uint32_t subMap_;
SubIt subIt_;
}; // ahm_iterator
} // namespace folly


@@ -1,500 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* AtomicHashMap --
*
* A high-performance concurrent hash map with int32 or int64 keys. Supports
* insert, find(key), findAt(index), erase(key), size, and more. Memory cannot
* be freed or reclaimed by erase. Can grow to a maximum of about 18 times the
* initial capacity, but performance degrades linearly with growth. Can also be
* used as an object store with unique 32-bit references directly into the
* internal storage (retrieved with iterator::getIndex()).
*
* Advantages:
* - High-performance (~2-4x tbb::concurrent_hash_map in heavily
* multi-threaded environments).
* - Efficient memory usage if initial capacity is not over estimated
* (especially for small keys and values).
* - Good fragmentation properties (only allocates in large slabs which can
* be reused with clear() and never move).
* - Can generate unique, long-lived 32-bit references for efficient lookup
* (see findAt()).
*
* Disadvantages:
* - Keys must be native int32 or int64, or explicitly converted.
* - Must be able to specify unique empty, locked, and erased keys.
* - Performance degrades linearly as size grows beyond initialization
* capacity.
* - Max size limit of ~18x initial size (dependent on max load factor).
* - Memory is not freed or reclaimed by erase.
*
* Usage and Operation Details:
* Simple performance/memory tradeoff with maxLoadFactor. Higher load factors
* give better memory utilization but probe lengths increase, reducing
* performance.
*
* Implementation and Performance Details:
* AHArray is a fixed size contiguous block of value_type cells. When
* writing a cell, the key is locked while the rest of the record is
* written. Once done, the cell is unlocked by setting the key. find()
* is completely wait-free and doesn't require any non-relaxed atomic
* operations. AHA cannot grow beyond initialization capacity, but is
* faster because of reduced data indirection.
*
* AHMap is a wrapper around AHArray sub-maps that allows growth and provides
* an interface closer to the STL UnorderedAssociativeContainer concept. These
* sub-maps are allocated on the fly and are processed in series, so the more
* there are (from growing past initial capacity), the worse the performance.
*
* Insert returns false if there is a key collision and throws if the max size
* of the map is exceeded.
*
* Benchmark performance with 8 simultaneous threads processing 1 million
* unique <int64, int64> entries on a 4-core, 2.5 GHz machine:
*
* Load Factor Mem Efficiency usec/Insert usec/Find
* 50% 50% 0.19 0.05
* 85% 85% 0.20 0.06
* 90% 90% 0.23 0.08
* 95% 95% 0.27 0.10
*
* See folly/tests/AtomicHashMapTest.cpp for more benchmarks.
*
* @author Spencer Ahrens <sahrens@fb.com>
* @author Jordan DeLong <delong.j@fb.com>
*
*/
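The write protocol described above (key slot doubles as a lock; publishing the key unlocks the cell) can be sketched for a single cell. This is a minimal single-cell illustration, not the AHArray implementation; `kEmpty`/`kLocked` are assumed sentinel values here, whereas the real container takes them as constructor parameters:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Assumed sentinel key values for the sketch.
constexpr int64_t kEmpty = -1;
constexpr int64_t kLocked = -2;

struct Cell {
  std::atomic<int64_t> key{kEmpty};
  int64_t value{0};
};

bool tryWriteCell(Cell& c, int64_t k, int64_t v) {
  int64_t expected = kEmpty;
  // Claim the cell by swinging the key from empty to locked.
  if (!c.key.compare_exchange_strong(expected, kLocked,
                                     std::memory_order_acquire)) {
    return false;  // another writer owns (or already wrote) this cell
  }
  c.value = v;  // write the rest of the record while the key is locked
  // Publish: unlocking is just storing the real key with release ordering,
  // so a reader that sees the key also sees the value.
  c.key.store(k, std::memory_order_release);
  return true;
}
```

Readers only need a relaxed-or-acquire load of the key and never block, which is how `find()` stays wait-free.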
#pragma once
#define FOLLY_ATOMICHASHMAP_H_
#include <boost/iterator/iterator_facade.hpp>
#include <boost/noncopyable.hpp>
#include <boost/type_traits/is_convertible.hpp>
#include <atomic>
#include <functional>
#include <stdexcept>
#include <folly/AtomicHashArray.h>
#include <folly/CPortability.h>
#include <folly/Likely.h>
#include <folly/ThreadCachedInt.h>
#include <folly/container/Foreach.h>
#include <folly/hash/Hash.h>
namespace folly {
/*
* AtomicHashMap provides an interface somewhat similar to the
* UnorderedAssociativeContainer concept in C++. It does not
* exactly match that concept (or even the basic Container concept)
* because of some restrictions imposed by our data structure.
*
* Specific differences (there are quite a few):
*
* - Efficiently thread safe for inserts (main point of this stuff),
* wait-free for lookups.
*
* - You can erase from this container, but the cell containing the key will
* not be freed or reclaimed.
*
* - You can erase everything by calling clear() (and you must guarantee only
* one thread can be using the container to do that).
*
* - We aren't DefaultConstructible, CopyConstructible, Assignable, or
* EqualityComparable. (Most of these are probably not something
* you actually want to do with this anyway.)
*
* - We don't support the various bucket functions, rehash(),
* reserve(), or equal_range(). Also no constructors taking
* iterators, although this could change.
*
* - Several insertion functions, notably operator[], are not
* implemented. It is a little too easy to misuse these functions
* with this container, where part of the point is that when an
* insertion happens for a new key, it will atomically have the
* desired value.
*
* - The map has no templated insert() taking an iterator range, but
* we do provide an insert(key, value). The latter seems more
* frequently useful for this container (to avoid sprinkling
* make_pair everywhere), and providing both can lead to some gross
* template error messages.
*
* - The Allocator must not be stateful (a new instance will be spun up for
* each allocation), and its allocate() method must take a raw number of
* bytes.
*
* - KeyT must be a 32 bit or 64 bit atomic integer type, and you must
* define special 'locked' and 'empty' key values in the ctor
*
* - We don't take the Hash function object as an instance in the
* constructor.
*
*/
// Thrown when insertion fails due to running out of space for
// submaps.
struct FOLLY_EXPORT AtomicHashMapFullError : std::runtime_error {
explicit AtomicHashMapFullError()
: std::runtime_error("AtomicHashMap is full") {}
};
template <
class KeyT,
class ValueT,
class HashFcn,
class EqualFcn,
class Allocator,
class ProbeFcn,
class KeyConvertFcn>
class AtomicHashMap : boost::noncopyable {
typedef AtomicHashArray<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
ProbeFcn,
KeyConvertFcn>
SubMap;
public:
typedef KeyT key_type;
typedef ValueT mapped_type;
typedef std::pair<const KeyT, ValueT> value_type;
typedef HashFcn hasher;
typedef EqualFcn key_equal;
typedef KeyConvertFcn key_convert;
typedef value_type* pointer;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef std::ptrdiff_t difference_type;
typedef std::size_t size_type;
typedef typename SubMap::Config Config;
template <class ContT, class IterVal, class SubIt>
struct ahm_iterator;
typedef ahm_iterator<
const AtomicHashMap,
const value_type,
typename SubMap::const_iterator>
const_iterator;
typedef ahm_iterator<AtomicHashMap, value_type, typename SubMap::iterator>
iterator;
public:
const float kGrowthFrac_; // How much to grow when we run out of capacity.
// The constructor takes a finalSizeEst which is the optimal
// number of elements to maximize space utilization and performance,
// and a Config object to specify more advanced options.
explicit AtomicHashMap(size_t finalSizeEst, const Config& c = Config());
~AtomicHashMap() {
const unsigned int numMaps =
numMapsAllocated_.load(std::memory_order_relaxed);
FOR_EACH_RANGE (i, 0, numMaps) {
SubMap* thisMap = subMaps_[i].load(std::memory_order_relaxed);
DCHECK(thisMap);
SubMap::destroy(thisMap);
}
}
key_equal key_eq() const {
return key_equal();
}
hasher hash_function() const {
return hasher();
}
/*
* insert --
*
* Returns a pair with an iterator to the element at r.first and a
* bool indicating success. Retrieve the index with ret.first.getIndex().
*
* Does not overwrite on key collision, but returns an iterator to
* the existing element (since this could be due to a race with
* another thread, it is often important to check this return
* value).
*
* Allocates new sub maps as the existing ones become full. If
* all sub maps are full, no element is inserted, and
* AtomicHashMapFullError is thrown.
*/
std::pair<iterator, bool> insert(const value_type& r) {
return emplace(r.first, r.second);
}
std::pair<iterator, bool> insert(key_type k, const mapped_type& v) {
return emplace(k, v);
}
std::pair<iterator, bool> insert(value_type&& r) {
return emplace(r.first, std::move(r.second));
}
std::pair<iterator, bool> insert(key_type k, mapped_type&& v) {
return emplace(k, std::move(v));
}
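Because insert throws rather than returning failure when all submaps are full, callers generally need a catch around unbounded insertion. A hedged sketch of that pattern: the error type is reproduced from this header, while `insertOrThrow()` is a stand-in stub, not the real insert:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Reproduced from the header above.
struct AtomicHashMapFullError : std::runtime_error {
  AtomicHashMapFullError() : std::runtime_error("AtomicHashMap is full") {}
};

// Stand-in for an insert that has exhausted all submap space.
bool insertOrThrow(bool mapIsFull) {
  if (mapIsFull) {
    throw AtomicHashMapFullError();
  }
  return true;  // the real map returns pair<iterator, bool>
}

std::string tryInsert(bool mapIsFull) {
  try {
    insertOrThrow(mapIsFull);
    return "inserted";
  } catch (const AtomicHashMapFullError& e) {
    return e.what();  // e.g. shed load, or fail the request upward
  }
}
```

Sizing `finalSizeEst` generously avoids this path entirely, since the map only throws after growing to its ~18x limit.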
/*
* emplace --
*
* Same contract as insert(), but performs in-place construction
* of the value type using the specified arguments.
*
* Also, like find(), this method optionally allows 'key_in' to have a type
* different from that stored in the table; see find(). If and only if no
* equal key is already present, this method converts 'key_in' to a key of
* type KeyT using the provided LookupKeyToKeyFcn.
*/
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal,
typename LookupKeyToKeyFcn = key_convert,
typename... ArgTs>
std::pair<iterator, bool> emplace(LookupKeyT k, ArgTs&&... vCtorArg);
/*
* find --
*
* Returns the iterator to the element if found, otherwise end().
*
* As an optional feature, the type of the key to look up (LookupKeyT) is
* allowed to be different from the type of keys actually stored (KeyT).
*
* This enables use cases where materializing the key is costly and usually
* redundant, e.g., canonicalizing/interning a set of strings and being able
* to look up by StringPiece. To use this feature, LookupHashFcn must take
* a LookupKeyT, and LookupEqualFcn must take KeyT and LookupKeyT as first
* and second parameter, respectively.
*
* See folly/test/ArrayHashMapTest.cpp for sample usage.
*/
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal>
iterator find(LookupKeyT k);
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal>
const_iterator find(LookupKeyT k) const;
/*
* erase --
*
* Erases key k from the map
*
* Returns 1 iff the key is found and erased, and 0 otherwise.
*/
size_type erase(key_type k);
/*
* clear --
*
* Wipes all keys and values from primary map and destroys all secondary
* maps. Primary map remains allocated and thus the memory can be reused
* in place. Not thread safe.
*
*/
void clear();
/*
* size --
*
* Returns the exact size of the map. Note this is not as cheap as typical
* size() implementations because, for each AtomicHashArray in this AHM, we
* need to grab a lock and accumulate the values from all the thread local
* counters. See folly/ThreadCachedInt.h for more details.
*/
size_t size() const;
bool empty() const {
return size() == 0;
}
size_type count(key_type k) const {
return find(k) == end() ? 0 : 1;
}
/*
* findAt --
*
* Returns an iterator into the map.
*
* idx should only be an unmodified value returned by calling getIndex() on
* a valid iterator returned by find() or insert(). If idx is invalid you
* have a bug and the process aborts.
*/
iterator findAt(uint32_t idx) {
SimpleRetT ret = findAtInternal(idx);
DCHECK_LT(ret.i, numSubMaps());
return iterator(
this,
ret.i,
subMaps_[ret.i].load(std::memory_order_relaxed)->makeIter(ret.j));
}
const_iterator findAt(uint32_t idx) const {
return const_cast<AtomicHashMap*>(this)->findAt(idx);
}
// Total capacity - summation of capacities of all submaps.
size_t capacity() const;
// Number of new insertions until current submaps are all at max load factor.
size_t spaceRemaining() const;
void setEntryCountThreadCacheSize(int32_t newSize) {
const int numMaps = numMapsAllocated_.load(std::memory_order_acquire);
for (int i = 0; i < numMaps; ++i) {
SubMap* map = subMaps_[i].load(std::memory_order_relaxed);
map->setEntryCountThreadCacheSize(newSize);
}
}
// Number of sub maps allocated so far to implement this map. The more there
// are, the worse the performance.
int numSubMaps() const {
return numMapsAllocated_.load(std::memory_order_acquire);
}
iterator begin() {
iterator it(this, 0, subMaps_[0].load(std::memory_order_relaxed)->begin());
it.checkAdvanceToNextSubmap();
return it;
}
const_iterator begin() const {
const_iterator it(
this, 0, subMaps_[0].load(std::memory_order_relaxed)->begin());
it.checkAdvanceToNextSubmap();
return it;
}
iterator end() {
return iterator();
}
const_iterator end() const {
return const_iterator();
}
/* Advanced functions for direct access: */
inline uint32_t recToIdx(const value_type& r, bool mayInsert = true) {
SimpleRetT ret =
mayInsert ? insertInternal(r.first, r.second) : findInternal(r.first);
return encodeIndex(ret.i, ret.j);
}
inline uint32_t recToIdx(value_type&& r, bool mayInsert = true) {
SimpleRetT ret = mayInsert ? insertInternal(r.first, std::move(r.second))
: findInternal(r.first);
return encodeIndex(ret.i, ret.j);
}
inline uint32_t
recToIdx(key_type k, const mapped_type& v, bool mayInsert = true) {
SimpleRetT ret = mayInsert ? insertInternal(k, v) : findInternal(k);
return encodeIndex(ret.i, ret.j);
}
inline uint32_t recToIdx(key_type k, mapped_type&& v, bool mayInsert = true) {
SimpleRetT ret =
mayInsert ? insertInternal(k, std::move(v)) : findInternal(k);
return encodeIndex(ret.i, ret.j);
}
inline uint32_t keyToIdx(const KeyT k, bool mayInsert = false) {
return recToIdx(value_type(k), mayInsert);
}
inline const value_type& idxToRec(uint32_t idx) const {
SimpleRetT ret = findAtInternal(idx);
return subMaps_[ret.i].load(std::memory_order_relaxed)->idxToRec(ret.j);
}
/* Private data and helper functions... */
private:
// This limits primary submap size to 2^31 ~= 2 billion, secondary submap
// size to 2^(32 - kNumSubMapBits_ - 1) = 2^27 ~= 130 million, and num subMaps
// to 2^kNumSubMapBits_ = 16.
static const uint32_t kNumSubMapBits_ = 4;
static const uint32_t kSecondaryMapBit_ = 1u << 31; // Highest bit
static const uint32_t kSubMapIndexShift_ = 32 - kNumSubMapBits_ - 1;
static const uint32_t kSubMapIndexMask_ = (1 << kSubMapIndexShift_) - 1;
static const uint32_t kNumSubMaps_ = 1 << kNumSubMapBits_;
static const uintptr_t kLockedPtr_ = 0x88ULL << 48; // invalid pointer
struct SimpleRetT {
uint32_t i;
size_t j;
bool success;
SimpleRetT(uint32_t ii, size_t jj, bool s) : i(ii), j(jj), success(s) {}
SimpleRetT() = default;
};
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal,
typename LookupKeyToKeyFcn = key_convert,
typename... ArgTs>
SimpleRetT insertInternal(LookupKeyT key, ArgTs&&... value);
template <
typename LookupKeyT = key_type,
typename LookupHashFcn = hasher,
typename LookupEqualFcn = key_equal>
SimpleRetT findInternal(const LookupKeyT k) const;
SimpleRetT findAtInternal(uint32_t idx) const;
std::atomic<SubMap*> subMaps_[kNumSubMaps_];
std::atomic<uint32_t> numMapsAllocated_;
inline bool tryLockMap(unsigned int idx) {
SubMap* val = nullptr;
return subMaps_[idx].compare_exchange_strong(
val, (SubMap*)kLockedPtr_, std::memory_order_acquire);
}
static inline uint32_t encodeIndex(uint32_t subMap, uint32_t subMapIdx);
}; // AtomicHashMap
template <
class KeyT,
class ValueT,
class HashFcn = std::hash<KeyT>,
class EqualFcn = std::equal_to<KeyT>,
class Allocator = std::allocator<char>>
using QuadraticProbingAtomicHashMap = AtomicHashMap<
KeyT,
ValueT,
HashFcn,
EqualFcn,
Allocator,
AtomicHashArrayQuadraticProbeFcn>;
} // namespace folly
#include <folly/AtomicHashMap-inl.h>


@@ -1,178 +0,0 @@
/*
* Copyright 2014-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <atomic>
#include <cassert>
#include <utility>
namespace folly {
/**
* A very simple atomic single-linked list primitive.
*
* Usage:
*
* class MyClass {
* AtomicIntrusiveLinkedListHook<MyClass> hook_;
* }
*
* AtomicIntrusiveLinkedList<MyClass, &MyClass::hook_> list;
* list.insert(&a);
* list.sweep([] (MyClass* c) { doSomething(c); });
*/
template <class T>
struct AtomicIntrusiveLinkedListHook {
T* next{nullptr};
};
template <class T, AtomicIntrusiveLinkedListHook<T> T::*HookMember>
class AtomicIntrusiveLinkedList {
public:
AtomicIntrusiveLinkedList() {}
AtomicIntrusiveLinkedList(const AtomicIntrusiveLinkedList&) = delete;
AtomicIntrusiveLinkedList& operator=(const AtomicIntrusiveLinkedList&) =
delete;
AtomicIntrusiveLinkedList(AtomicIntrusiveLinkedList&& other) noexcept {
auto tmp = other.head_.load();
other.head_ = head_.load();
head_ = tmp;
}
AtomicIntrusiveLinkedList& operator=(
AtomicIntrusiveLinkedList&& other) noexcept {
auto tmp = other.head_.load();
other.head_ = head_.load();
head_ = tmp;
return *this;
}
/**
* Note: list must be empty on destruction.
*/
~AtomicIntrusiveLinkedList() {
assert(empty());
}
bool empty() const {
return head_.load() == nullptr;
}
/**
* Atomically insert t at the head of the list.
* @return True if the inserted element is the only one in the list
* after the call.
*/
bool insertHead(T* t) {
assert(next(t) == nullptr);
auto oldHead = head_.load(std::memory_order_relaxed);
do {
next(t) = oldHead;
/* oldHead is updated by the call below.
NOTE: we don't use next(t) instead of oldHead directly due to
compiler bugs (GCC prior to 4.8.3 (bug 60272), clang (bug 18899),
MSVC (bug 819819); source:
http://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange */
} while (!head_.compare_exchange_weak(
oldHead, t, std::memory_order_release, std::memory_order_relaxed));
return oldHead == nullptr;
}
/**
* Replaces the head with nullptr,
* and calls func() on the removed elements in the order from tail to head.
* Returns false if the list was empty.
*/
template <typename F>
bool sweepOnce(F&& func) {
if (auto head = head_.exchange(nullptr)) {
auto rhead = reverse(head);
unlinkAll(rhead, std::forward<F>(func));
return true;
}
return false;
}
/**
* Repeatedly replaces the head with nullptr,
* and calls func() on the removed elements in the order from tail to head.
* Stops when the list is empty.
*/
template <typename F>
void sweep(F&& func) {
while (sweepOnce(func)) {
}
}
/**
* Similar to sweep() but calls func() on elements in LIFO order.
*
* func() is called for all elements in the list at the moment
* reverseSweep() is called. Unlike sweep() it does not loop to ensure the
* list is empty at some point after the last invocation. This way callers
* can reason about the ordering: elements inserted since the last call to
* reverseSweep() will be provided in LIFO order.
*
* Example: if elements are inserted in the order 1-2-3, the callback is
* invoked 3-2-1. If the callback moves elements onto a stack, popping off
* the stack will produce the original insertion order 1-2-3.
*/
template <typename F>
void reverseSweep(F&& func) {
// We don't loop like sweep() does because the overall order of callbacks
// would be strand-wise LIFO which is meaningless to callers.
auto head = head_.exchange(nullptr);
unlinkAll(head, std::forward<F>(func));
}
private:
std::atomic<T*> head_{nullptr};
static T*& next(T* t) {
return (t->*HookMember).next;
}
/* Reverses a linked list, returning the pointer to the new head
(old tail) */
static T* reverse(T* head) {
T* rhead = nullptr;
while (head != nullptr) {
auto t = head;
head = next(t);
next(t) = rhead;
rhead = t;
}
return rhead;
}
/* Unlinks all elements in the linked list fragment pointed to by `head',
* calling func() on every element */
template <typename F>
void unlinkAll(T* head, F&& func) {
while (head != nullptr) {
auto t = head;
head = next(t);
next(t) = nullptr;
func(t);
}
}
};
} // namespace folly
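The interplay of `insertHead()` and `sweep()` above can be shown with a simplified, self-contained re-creation (no hook template, one global list; this is a demo sketch, not the folly class). It makes visible why `sweep()` reverses: CAS pushes build the chain newest-first, so reversal restores insertion order for the callbacks:

```cpp
#include <atomic>
#include <cassert>
#include <vector>

struct Node {
  int id;
  Node* next = nullptr;
};

std::atomic<Node*> g_head{nullptr};

bool insertHead(Node* t) {
  Node* oldHead = g_head.load(std::memory_order_relaxed);
  do {
    t->next = oldHead;  // oldHead is refreshed by each failed CAS below
  } while (!g_head.compare_exchange_weak(oldHead, t,
                                         std::memory_order_release,
                                         std::memory_order_relaxed));
  return oldHead == nullptr;  // true iff t is now the only element
}

template <typename F>
void sweep(F func) {
  Node* head = g_head.exchange(nullptr);
  Node* rhead = nullptr;
  while (head != nullptr) {  // reverse the detached chain
    Node* t = head;
    head = t->next;
    t->next = rhead;
    rhead = t;
  }
  while (rhead != nullptr) {  // callbacks run tail-to-head = insertion order
    Node* t = rhead;
    rhead = t->next;
    t->next = nullptr;
    func(t);
  }
}
```

Skipping the reversal step gives exactly the LIFO behavior of `reverseSweep()`.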


@@ -1,108 +0,0 @@
/*
* Copyright 2014-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <folly/AtomicIntrusiveLinkedList.h>
#include <folly/Memory.h>
namespace folly {
/**
* A very simple atomic single-linked list primitive.
*
* Usage:
*
* AtomicLinkedList<MyClass> list;
* list.insert(a);
* list.sweep([] (MyClass& c) { doSomething(c); });
*/
template <class T>
class AtomicLinkedList {
public:
AtomicLinkedList() {}
AtomicLinkedList(const AtomicLinkedList&) = delete;
AtomicLinkedList& operator=(const AtomicLinkedList&) = delete;
AtomicLinkedList(AtomicLinkedList&& other) noexcept = default;
AtomicLinkedList& operator=(AtomicLinkedList&& other) = default;
~AtomicLinkedList() {
sweep([](T&&) {});
}
bool empty() const {
return list_.empty();
}
/**
* Atomically insert t at the head of the list.
* @return True if the inserted element is the only one in the list
* after the call.
*/
bool insertHead(T t) {
auto wrapper = std::make_unique<Wrapper>(std::move(t));
return list_.insertHead(wrapper.release());
}
/**
* Repeatedly pops element from head,
* and calls func() on the removed elements in the order from tail to head.
* Stops when the list is empty.
*/
template <typename F>
void sweep(F&& func) {
list_.sweep([&](Wrapper* wrapperPtr) mutable {
std::unique_ptr<Wrapper> wrapper(wrapperPtr);
func(std::move(wrapper->data));
});
}
/**
* Similar to sweep() but calls func() on elements in LIFO order.
*
* func() is called for all elements in the list at the moment
* reverseSweep() is called. Unlike sweep() it does not loop to ensure the
* list is empty at some point after the last invocation. This way callers
* can reason about the ordering: elements inserted since the last call to
* reverseSweep() will be provided in LIFO order.
*
* Example: if elements are inserted in the order 1-2-3, the callback is
* invoked 3-2-1. If the callback moves elements onto a stack, popping off
* the stack will produce the original insertion order 1-2-3.
*/
template <typename F>
void reverseSweep(F&& func) {
list_.reverseSweep([&](Wrapper* wrapperPtr) mutable {
std::unique_ptr<Wrapper> wrapper(wrapperPtr);
func(std::move(wrapper->data));
});
}
private:
struct Wrapper {
explicit Wrapper(T&& t) : data(std::move(t)) {}
AtomicIntrusiveLinkedListHook<Wrapper> hook;
T data;
};
AtomicIntrusiveLinkedList<Wrapper, &Wrapper::hook> list_;
};
} // namespace folly
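The key idea in `AtomicLinkedList` is the ownership handoff through `Wrapper`: a value is boxed on the heap, `release()`d into the intrusive list, and re-wrapped in a `unique_ptr` during sweep so each node is freed exactly once. A single-threaded sketch of that pattern (the list here is a plain pointer chain, not the atomic one, and `sweepLifo()` deliberately pops head-first, unlike folly's `sweep()`):

```cpp
#include <cassert>
#include <memory>
#include <vector>

template <class T>
struct Box {
  explicit Box(T&& t) : data(std::move(t)) {}
  T data;
  Box* next = nullptr;
};

template <class T>
class OwningList {
 public:
  void insertHead(T t) {
    auto w = std::make_unique<Box<T>>(std::move(t));
    w->next = head_;
    head_ = w.release();  // ownership moves into the raw list
  }

  template <typename F>
  void sweepLifo(F func) {  // pops newest-first; folly's sweep() reverses
    while (head_ != nullptr) {
      std::unique_ptr<Box<T>> w(head_);  // reacquire ownership
      head_ = w->next;
      func(std::move(w->data));  // node freed when w leaves scope
    }
  }

 private:
  Box<T>* head_ = nullptr;
};
```

The same release/reacquire dance is what lets the real class hold non-intrusive values in an intrusive list without leaking.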


@@ -1,515 +0,0 @@
/*
* Copyright 2013-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <atomic>
#include <cstdint>
#include <functional>
#include <limits>
#include <stdexcept>
#include <system_error>
#include <type_traits>
#include <boost/type_traits/has_trivial_destructor.hpp>
#include <folly/Conv.h>
#include <folly/Likely.h>
#include <folly/Random.h>
#include <folly/detail/AtomicUnorderedMapUtils.h>
#include <folly/lang/Bits.h>
#include <folly/portability/SysMman.h>
#include <folly/portability/Unistd.h>
namespace folly {
/// You're probably reading this because you are looking for an
/// AtomicUnorderedMap<K,V> that is fully general, highly concurrent (for
/// reads, writes, and iteration), and makes no performance compromises.
/// We haven't figured that one out yet. What you will find here is a
/// hash table implementation that sacrifices generality so that it can
/// give you all of the other things.
///
/// LIMITATIONS:
///
/// * Insert only (*) - the only write operation supported directly by
/// AtomicUnorderedInsertMap is findOrConstruct. There is a (*) because
/// values aren't moved, so you can roll your own concurrency control for
/// in-place updates of values (see MutableData and MutableAtom below),
/// but the hash table itself doesn't help you.
///
/// * No resizing - you must specify the capacity up front, and once
/// the hash map gets full you won't be able to insert. Insert
/// performance will degrade once the load factor is high. Insert is
/// O(1/(1-actual_load_factor)). Note that this is a pretty strong
/// limitation, because you can't remove existing keys.
///
/// * 2^30 maximum default capacity - by default AtomicUnorderedInsertMap
/// uses uint32_t internal indexes (and steals 2 bits), limiting you
/// to about a billion entries. If you need more you can fill in all
/// of the template params so that IndexType becomes uint64_t, or you
/// can use AtomicUnorderedInsertMap64. 64-bit indexes will increase
/// the space overhead of the map, of course.
///
/// WHAT YOU GET IN EXCHANGE:
///
/// * Arbitrary key and value types - any K and V that can be used in a
/// std::unordered_map can be used here. In fact, the key and value
/// types don't even have to be copyable or moveable!
///
/// * Keys and values in the map won't be moved - it is safe to keep
/// pointers or references to the keys and values in the map, because
/// they are never moved or destroyed (until the map itself is destroyed).
///
/// * Iterators are never invalidated - writes don't invalidate iterators,
/// so you can scan and insert in parallel.
///
/// * Fast wait-free reads - reads are usually only a single cache miss,
/// even when the hash table is very large. Wait-freedom means that
/// you won't see latency outliers even in the face of concurrent writes.
///
/// * Lock-free insert - writes proceed in parallel. If a thread in the
/// middle of a write is unlucky and gets suspended, it doesn't block
/// anybody else.
///
/// COMMENTS ON INSERT-ONLY
///
/// This map provides wait-free linearizable reads and lock-free
/// linearizable inserts. Inserted values won't be moved, but no
/// concurrency control is provided for safely updating them. To remind
/// you of that fact they are only provided in const form. This is the
/// only simple safe thing to do while preserving something like the normal
/// std::map iteration form, which requires that iteration be exposed
/// via std::pair (and prevents encapsulation of access to the value).
///
/// There are a couple of reasonable policies for doing in-place
/// concurrency control on the values. I am hoping that the policy can
/// be injected via the value type or an extra template param, to keep
/// the core AtomicUnorderedInsertMap insert-only:
///
/// CONST: this is the currently implemented strategy, which is simple,
/// performant, and not that expressive. You can always put in a value
/// with a mutable field (see MutableAtom below), but that doesn't look
/// as pretty as it should.
///
/// ATOMIC: for integers and integer-size trivially copyable structs
/// (via an adapter like tao/queues/AtomicStruct) the value can be a
/// std::atomic and read and written atomically.
///
/// SEQ-LOCK: attach a counter incremented before and after write.
/// Writers serialize by using CAS to make an even->odd transition,
/// then odd->even after the write. Readers grab the value with memcpy,
/// checking sequence value before and after. Readers retry until they
/// see an even sequence number that doesn't change. This works for
/// larger structs, but still requires memcpy to be equivalent to copy
/// assignment, and it is no longer lock-free. It scales very well,
/// because the readers are still invisible (no cache line writes).
///
/// LOCK: folly's SharedMutex would be a good choice here.
///
/// MEMORY ALLOCATION
///
/// Underlying memory is allocated as a big anonymous mmap chunk, which
/// might be cheaper than calloc() and is certainly not more expensive
/// for large maps. If the SkipKeyValueDeletion template param is true
/// then deletion of the map consists of unmapping the backing memory,
/// which is much faster than destructing all of the keys and values.
/// Feel free to override SkipKeyValueDeletion if the default detection
/// (boost::has_trivial_destructor) isn't recognizing the triviality of
/// your destructors.
template <
typename Key,
typename Value,
typename Hash = std::hash<Key>,
typename KeyEqual = std::equal_to<Key>,
bool SkipKeyValueDeletion =
(boost::has_trivial_destructor<Key>::value &&
boost::has_trivial_destructor<Value>::value),
template <typename> class Atom = std::atomic,
typename IndexType = uint32_t,
typename Allocator = folly::detail::MMapAlloc>
struct AtomicUnorderedInsertMap {
typedef Key key_type;
typedef Value mapped_type;
typedef std::pair<Key, Value> value_type;
typedef std::size_t size_type;
typedef std::ptrdiff_t difference_type;
typedef Hash hasher;
typedef KeyEqual key_equal;
typedef const value_type& const_reference;
typedef struct ConstIterator {
ConstIterator(const AtomicUnorderedInsertMap& owner, IndexType slot)
: owner_(owner), slot_(slot) {}
ConstIterator(const ConstIterator&) = default;
ConstIterator& operator=(const ConstIterator&) = default;
const value_type& operator*() const {
return owner_.slots_[slot_].keyValue();
}
const value_type* operator->() const {
return &owner_.slots_[slot_].keyValue();
}
// pre-increment
const ConstIterator& operator++() {
while (slot_ > 0) {
--slot_;
if (owner_.slots_[slot_].state() == LINKED) {
break;
}
}
return *this;
}
// post-increment
ConstIterator operator++(int /* dummy */) {
auto prev = *this;
++*this;
return prev;
}
bool operator==(const ConstIterator& rhs) const {
return slot_ == rhs.slot_;
}
bool operator!=(const ConstIterator& rhs) const {
return !(*this == rhs);
}
private:
const AtomicUnorderedInsertMap& owner_;
IndexType slot_;
} const_iterator;
friend ConstIterator;
/// Constructs a map that will support the insertion of maxSize key-value
/// pairs without exceeding the max load factor. Load factors of greater
/// than 1 are not supported, and once the actual load factor of the
/// map approaches 1 the insert performance will suffer. The capacity
/// is limited to 2^30 (about a billion) for the default IndexType,
/// beyond which we will throw invalid_argument.
explicit AtomicUnorderedInsertMap(
size_t maxSize,
float maxLoadFactor = 0.8f,
const Allocator& alloc = Allocator())
: allocator_(alloc) {
size_t capacity = size_t(maxSize / std::min(1.0f, maxLoadFactor) + 128);
size_t avail = size_t{1} << (8 * sizeof(IndexType) - 2);
if (capacity > avail && maxSize < avail) {
// we'll do our best
capacity = avail;
}
if (capacity < maxSize || capacity > avail) {
throw std::invalid_argument(
"AtomicUnorderedInsertMap capacity must fit in IndexType with 2 bits "
"left over");
}
numSlots_ = capacity;
slotMask_ = folly::nextPowTwo(capacity * 4) - 1;
mmapRequested_ = sizeof(Slot) * capacity;
slots_ = reinterpret_cast<Slot*>(allocator_.allocate(mmapRequested_));
zeroFillSlots();
// mark the zero-th slot as in-use but not valid, since that happens
// to be our nil value
slots_[0].stateUpdate(EMPTY, CONSTRUCTING);
}
~AtomicUnorderedInsertMap() {
if (!SkipKeyValueDeletion) {
for (size_t i = 1; i < numSlots_; ++i) {
slots_[i].~Slot();
}
}
allocator_.deallocate(reinterpret_cast<char*>(slots_), mmapRequested_);
}
/// Searches for the key, returning (iter,false) if it is found.
/// If it is not found calls the functor Func with a void* argument
/// that is raw storage suitable for placement construction of a Value
/// (see raw_value_type), then returns (iter,true). May call Func and
/// then return (iter,false) if there are other concurrent writes, in
/// which case the newly constructed value will be immediately destroyed.
///
/// This function does not block other readers or writers. If there
/// are other concurrent writes, many parallel calls to func may happen
/// and only the first one to complete will win. The values constructed
/// by the other calls to func will be destroyed.
///
/// Usage:
///
/// AtomicUnorderedInsertMap<std::string,std::string> memo;
///
///   auto value = memo.findOrConstruct(key, [=](void* raw) {
///     new (raw) std::string(computation(key));
///   }).first->second;
template <typename Func>
std::pair<const_iterator, bool> findOrConstruct(const Key& key, Func&& func) {
auto const slot = keyToSlotIdx(key);
auto prev = slots_[slot].headAndState_.load(std::memory_order_acquire);
auto existing = find(key, slot);
if (existing != 0) {
return std::make_pair(ConstIterator(*this, existing), false);
}
auto idx = allocateNear(slot);
new (&slots_[idx].keyValue().first) Key(key);
func(static_cast<void*>(&slots_[idx].keyValue().second));
while (true) {
slots_[idx].next_ = prev >> 2;
// we can merge the head update and the CONSTRUCTING -> LINKED update
// into a single CAS if slot == idx (which should happen often)
auto after = idx << 2;
if (slot == idx) {
after += LINKED;
} else {
after += (prev & 3);
}
if (slots_[slot].headAndState_.compare_exchange_strong(prev, after)) {
// success
if (idx != slot) {
slots_[idx].stateUpdate(CONSTRUCTING, LINKED);
}
return std::make_pair(ConstIterator(*this, idx), true);
}
// compare_exchange_strong updates its first arg on failure, so
// there is no need to reread prev
existing = find(key, slot);
if (existing != 0) {
// our allocated key and value are no longer needed
slots_[idx].keyValue().first.~Key();
slots_[idx].keyValue().second.~Value();
slots_[idx].stateUpdate(CONSTRUCTING, EMPTY);
return std::make_pair(ConstIterator(*this, existing), false);
}
}
}
/// This isn't really emplace, but it is what we need to test.
/// Eventually we can duplicate all of the std::pair constructor
/// forms, including a recursive tuple forwarding template (see
/// http://functionalcpp.wordpress.com/2013/08/28/tuple-forwarding/).
template <class K, class V>
std::pair<const_iterator, bool> emplace(const K& key, V&& value) {
return findOrConstruct(
key, [&](void* raw) { new (raw) Value(std::forward<V>(value)); });
}
const_iterator find(const Key& key) const {
return ConstIterator(*this, find(key, keyToSlotIdx(key)));
}
const_iterator cbegin() const {
IndexType slot = numSlots_ - 1;
while (slot > 0 && slots_[slot].state() != LINKED) {
--slot;
}
return ConstIterator(*this, slot);
}
const_iterator cend() const {
return ConstIterator(*this, 0);
}
private:
enum : IndexType {
kMaxAllocationTries = 1000, // after this we throw
};
enum BucketState : IndexType {
EMPTY = 0,
CONSTRUCTING = 1,
LINKED = 2,
};
/// Lock-free insertion is easiest by prepending to collision chains.
/// A large chaining hash table takes two cache misses instead of
/// one, however. Our solution is to colocate the bucket storage and
/// the head storage, so that even though we are traversing chains we
/// are likely to stay within the same cache line. Just make sure to
/// traverse head before looking at any keys. This strategy gives us
/// 32 bit pointers and fast iteration.
struct Slot {
/// The bottom two bits are the BucketState, the rest is the index
/// of the first bucket for the chain whose keys map to this slot.
/// When things are going well the head usually links to this slot,
/// but that doesn't always have to happen.
Atom<IndexType> headAndState_;
/// The next bucket in the chain
IndexType next_;
/// Key and Value
typename std::aligned_storage<sizeof(value_type), alignof(value_type)>::type
raw_;
~Slot() {
auto s = state();
assert(s == EMPTY || s == LINKED);
if (s == LINKED) {
keyValue().first.~Key();
keyValue().second.~Value();
}
}
BucketState state() const {
return BucketState(headAndState_.load(std::memory_order_acquire) & 3);
}
void stateUpdate(BucketState before, BucketState after) {
assert(state() == before);
headAndState_ += (after - before);
}
value_type& keyValue() {
assert(state() != EMPTY);
return *static_cast<value_type*>(static_cast<void*>(&raw_));
}
const value_type& keyValue() const {
assert(state() != EMPTY);
return *static_cast<const value_type*>(static_cast<const void*>(&raw_));
}
};
// We manually manage the slot memory so we can bypass initialization
// (by getting a zero-filled mmap chunk) and optionally destruction of
// the slots
size_t mmapRequested_;
size_t numSlots_;
  /// tricky, see keyToSlotIdx
size_t slotMask_;
Allocator allocator_;
Slot* slots_;
IndexType keyToSlotIdx(const Key& key) const {
size_t h = hasher()(key);
h &= slotMask_;
while (h >= numSlots_) {
h -= numSlots_;
}
return h;
}
IndexType find(const Key& key, IndexType slot) const {
KeyEqual ke = {};
auto hs = slots_[slot].headAndState_.load(std::memory_order_acquire);
for (slot = hs >> 2; slot != 0; slot = slots_[slot].next_) {
if (ke(key, slots_[slot].keyValue().first)) {
return slot;
}
}
return 0;
}
/// Allocates a slot and returns its index. Tries to put it near
/// slots_[start].
IndexType allocateNear(IndexType start) {
for (IndexType tries = 0; tries < kMaxAllocationTries; ++tries) {
auto slot = allocationAttempt(start, tries);
auto prev = slots_[slot].headAndState_.load(std::memory_order_acquire);
if ((prev & 3) == EMPTY &&
slots_[slot].headAndState_.compare_exchange_strong(
prev, prev + CONSTRUCTING - EMPTY)) {
return slot;
}
}
throw std::bad_alloc();
}
/// Returns the slot we should attempt to allocate after tries failed
/// tries, starting from the specified slot. This is pulled out so we
/// can specialize it differently during deterministic testing
IndexType allocationAttempt(IndexType start, IndexType tries) const {
if (LIKELY(tries < 8 && start + tries < numSlots_)) {
return IndexType(start + tries);
} else {
IndexType rv;
if (sizeof(IndexType) <= 4) {
rv = IndexType(folly::Random::rand32(numSlots_));
} else {
rv = IndexType(folly::Random::rand64(numSlots_));
}
assert(rv < numSlots_);
return rv;
}
}
void zeroFillSlots() {
using folly::detail::GivesZeroFilledMemory;
if (!GivesZeroFilledMemory<Allocator>::value) {
memset(slots_, 0, mmapRequested_);
}
}
};
/// AtomicUnorderedInsertMap64 is just a type alias that makes it easier
/// to select a 64 bit slot index type. Use this if you need a capacity
/// bigger than 2^30 (about a billion). This increases memory overheads,
/// obviously.
template <
typename Key,
typename Value,
typename Hash = std::hash<Key>,
typename KeyEqual = std::equal_to<Key>,
bool SkipKeyValueDeletion =
(boost::has_trivial_destructor<Key>::value &&
boost::has_trivial_destructor<Value>::value),
template <typename> class Atom = std::atomic,
typename Allocator = folly::detail::MMapAlloc>
using AtomicUnorderedInsertMap64 = AtomicUnorderedInsertMap<
Key,
Value,
Hash,
KeyEqual,
SkipKeyValueDeletion,
Atom,
uint64_t,
Allocator>;
/// MutableAtom is a tiny wrapper that gives you the option of atomically
/// updating values inserted into an AtomicUnorderedInsertMap<K,
/// MutableAtom<V>>. This relies on AtomicUnorderedInsertMap's guarantee
/// that it doesn't move values.
template <typename T, template <typename> class Atom = std::atomic>
struct MutableAtom {
mutable Atom<T> data;
explicit MutableAtom(const T& init) : data(init) {}
};
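/// Illustrative usage sketch (the names "counters" and "hits" are
/// hypothetical, not part of folly). Because the map never moves or
/// destroys inserted values, the atom can be updated in place even
/// though the iterator only exposes const access:
///
///   AtomicUnorderedInsertMap<std::string, MutableAtom<int>> counters(1024);
///   auto iter = counters.emplace("hits", 0).first;
///   iter->second.data.fetch_add(1, std::memory_order_relaxed);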
/// MutableData is a tiny wrapper that gives you the option of using an
/// external concurrency control mechanism to update values inserted
/// into an AtomicUnorderedInsertMap.
template <typename T>
struct MutableData {
mutable T data;
explicit MutableData(const T& init) : data(init) {}
};
} // namespace folly


@@ -1,495 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// @author Andrei Alexandrescu (andrei.alexandrescu@fb.com)
#include <folly/Benchmark.h>
#include <algorithm>
#include <cmath>
#include <cstring>
#include <iostream>
#include <limits>
#include <map>
#include <memory>
#include <utility>
#include <vector>
#include <boost/regex.hpp>
#include <folly/MapUtil.h>
#include <folly/String.h>
#include <folly/container/Foreach.h>
#include <folly/json.h>
using namespace std;
DEFINE_bool(benchmark, false, "Run benchmarks.");
DEFINE_bool(json, false, "Output in JSON format.");
DEFINE_bool(json_verbose, false, "Output in verbose JSON format.");
DEFINE_string(
bm_regex,
"",
"Only benchmarks whose names match this regex will be run.");
DEFINE_int64(
bm_min_usec,
100,
"Minimum # of microseconds we'll accept for each benchmark.");
DEFINE_int32(
bm_min_iters,
1,
"Minimum # of iterations we'll try for each benchmark.");
DEFINE_int64(
bm_max_iters,
1 << 30,
"Maximum # of iterations we'll try for each benchmark.");
DEFINE_int32(
bm_max_secs,
1,
"Maximum # of seconds we'll spend on each benchmark.");
namespace folly {
std::chrono::high_resolution_clock::duration BenchmarkSuspender::timeSpent;
typedef function<detail::TimeIterPair(unsigned int)> BenchmarkFun;
vector<detail::BenchmarkRegistration>& benchmarks() {
static vector<detail::BenchmarkRegistration> _benchmarks;
return _benchmarks;
}
#define FB_FOLLY_GLOBAL_BENCHMARK_BASELINE fbFollyGlobalBenchmarkBaseline
#define FB_STRINGIZE_X2(x) FB_STRINGIZE(x)
// Add the global baseline
BENCHMARK(FB_FOLLY_GLOBAL_BENCHMARK_BASELINE) {
#ifdef _MSC_VER
_ReadWriteBarrier();
#else
asm volatile("");
#endif
}
size_t getGlobalBenchmarkBaselineIndex() {
const char* global = FB_STRINGIZE_X2(FB_FOLLY_GLOBAL_BENCHMARK_BASELINE);
auto it = std::find_if(
benchmarks().begin(),
benchmarks().end(),
[global](const detail::BenchmarkRegistration& v) {
return v.name == global;
});
CHECK(it != benchmarks().end());
return size_t(std::distance(benchmarks().begin(), it));
}
#undef FB_STRINGIZE_X2
#undef FB_FOLLY_GLOBAL_BENCHMARK_BASELINE
void detail::addBenchmarkImpl(
const char* file,
const char* name,
BenchmarkFun fun) {
benchmarks().push_back({file, name, std::move(fun)});
}
/**
* Given a bunch of benchmark samples, estimate the actual run time.
*/
static double estimateTime(double* begin, double* end) {
assert(begin < end);
// Current state of the art: get the minimum. After some
// experimentation, it seems taking the minimum is the best.
return *min_element(begin, end);
}
static double runBenchmarkGetNSPerIteration(
const BenchmarkFun& fun,
const double globalBaseline) {
using std::chrono::duration_cast;
using std::chrono::high_resolution_clock;
using std::chrono::microseconds;
using std::chrono::nanoseconds;
using std::chrono::seconds;
  // The key here is accuracy; numbers that are too low mean the
  // measurement was too coarse. We up the ante until we get to at
  // least minNanoseconds timings.
static_assert(
std::is_same<high_resolution_clock::duration, nanoseconds>::value,
"High resolution clock must be nanosecond resolution.");
// We choose a minimum minimum (sic) of 100,000 nanoseconds, but if
// the clock resolution is worse than that, it will be larger. In
// essence we're aiming at making the quantization noise 0.01%.
static const auto minNanoseconds = std::max<nanoseconds>(
nanoseconds(100000), microseconds(FLAGS_bm_min_usec));
// We do measurements in several epochs and take the minimum, to
// account for jitter.
static const unsigned int epochs = 1000;
// We establish a total time budget as we don't want a measurement
// to take too long. This will curtail the number of actual epochs.
const auto timeBudget = seconds(FLAGS_bm_max_secs);
auto global = high_resolution_clock::now();
double epochResults[epochs] = {0};
size_t actualEpochs = 0;
for (; actualEpochs < epochs; ++actualEpochs) {
const auto maxIters = uint32_t(FLAGS_bm_max_iters);
for (auto n = uint32_t(FLAGS_bm_min_iters); n < maxIters; n *= 2) {
auto const nsecsAndIter = fun(static_cast<unsigned int>(n));
if (nsecsAndIter.first < minNanoseconds) {
continue;
}
// We got an accurate enough timing, done. But only save if
// smaller than the current result.
auto nsecs = duration_cast<nanoseconds>(nsecsAndIter.first).count();
epochResults[actualEpochs] =
max(0.0, double(nsecs) / nsecsAndIter.second - globalBaseline);
// Done with the current epoch, we got a meaningful timing.
break;
}
auto now = high_resolution_clock::now();
if (now - global >= timeBudget) {
// No more time budget available.
++actualEpochs;
break;
}
}
// If the benchmark was basically drowned in baseline noise, it's
// possible it became negative.
return max(0.0, estimateTime(epochResults, epochResults + actualEpochs));
}
struct ScaleInfo {
double boundary;
const char* suffix;
};
static const ScaleInfo kTimeSuffixes[]{
{365.25 * 24 * 3600, "years"},
{24 * 3600, "days"},
{3600, "hr"},
{60, "min"},
{1, "s"},
{1E-3, "ms"},
{1E-6, "us"},
{1E-9, "ns"},
{1E-12, "ps"},
{1E-15, "fs"},
{0, nullptr},
};
static const ScaleInfo kMetricSuffixes[]{
{1E24, "Y"}, // yotta
{1E21, "Z"}, // zetta
{1E18, "X"}, // "exa" written with suffix 'X' so as to not create
// confusion with scientific notation
{1E15, "P"}, // peta
    {1E12, "T"}, // tera
{1E9, "G"}, // giga
{1E6, "M"}, // mega
{1E3, "K"}, // kilo
{1, ""},
{1E-3, "m"}, // milli
{1E-6, "u"}, // micro
{1E-9, "n"}, // nano
{1E-12, "p"}, // pico
{1E-15, "f"}, // femto
{1E-18, "a"}, // atto
{1E-21, "z"}, // zepto
{1E-24, "y"}, // yocto
{0, nullptr},
};
static string
humanReadable(double n, unsigned int decimals, const ScaleInfo* scales) {
if (std::isinf(n) || std::isnan(n)) {
return folly::to<string>(n);
}
const double absValue = fabs(n);
const ScaleInfo* scale = scales;
while (absValue < scale[0].boundary && scale[1].suffix != nullptr) {
++scale;
}
const double scaledValue = n / scale->boundary;
return stringPrintf("%.*f%s", decimals, scaledValue, scale->suffix);
}
static string readableTime(double n, unsigned int decimals) {
return humanReadable(n, decimals, kTimeSuffixes);
}
static string metricReadable(double n, unsigned int decimals) {
return humanReadable(n, decimals, kMetricSuffixes);
}
namespace {
class BenchmarkResultsPrinter {
public:
static constexpr unsigned int columns{76};
double baselineNsPerIter{numeric_limits<double>::max()};
string lastFile;
void separator(char pad) {
puts(string(columns, pad).c_str());
}
void header(const string& file) {
separator('=');
printf("%-*srelative time/iter iters/s\n", columns - 28, file.c_str());
separator('=');
}
void print(const vector<detail::BenchmarkResult>& data) {
for (auto& datum : data) {
auto file = datum.file;
if (file != lastFile) {
// New file starting
header(file);
lastFile = file;
}
string s = datum.name;
if (s == "-") {
separator('-');
continue;
}
bool useBaseline /* = void */;
if (s[0] == '%') {
s.erase(0, 1);
useBaseline = true;
} else {
baselineNsPerIter = datum.timeInNs;
useBaseline = false;
}
s.resize(columns - 29, ' ');
auto nsPerIter = datum.timeInNs;
auto secPerIter = nsPerIter / 1E9;
auto itersPerSec = (secPerIter == 0)
? std::numeric_limits<double>::infinity()
: (1 / secPerIter);
if (!useBaseline) {
// Print without baseline
printf(
"%*s %9s %7s\n",
static_cast<int>(s.size()),
s.c_str(),
readableTime(secPerIter, 2).c_str(),
metricReadable(itersPerSec, 2).c_str());
} else {
// Print with baseline
auto rel = baselineNsPerIter / nsPerIter * 100.0;
printf(
"%*s %7.2f%% %9s %7s\n",
static_cast<int>(s.size()),
s.c_str(),
rel,
readableTime(secPerIter, 2).c_str(),
metricReadable(itersPerSec, 2).c_str());
}
}
}
};
} // namespace
static void printBenchmarkResultsAsJson(
const vector<detail::BenchmarkResult>& data) {
dynamic d = dynamic::object;
for (auto& datum : data) {
d[datum.name] = datum.timeInNs * 1000.;
}
printf("%s\n", toPrettyJson(d).c_str());
}
static void printBenchmarkResultsAsVerboseJson(
const vector<detail::BenchmarkResult>& data) {
dynamic d;
benchmarkResultsToDynamic(data, d);
printf("%s\n", toPrettyJson(d).c_str());
}
static void printBenchmarkResults(const vector<detail::BenchmarkResult>& data) {
if (FLAGS_json_verbose) {
printBenchmarkResultsAsVerboseJson(data);
return;
} else if (FLAGS_json) {
printBenchmarkResultsAsJson(data);
return;
}
CHECK(FLAGS_json_verbose || FLAGS_json) << "Cannot print benchmark results";
}
void benchmarkResultsToDynamic(
const vector<detail::BenchmarkResult>& data,
dynamic& out) {
out = dynamic::array;
for (auto& datum : data) {
out.push_back(dynamic::array(datum.file, datum.name, datum.timeInNs));
}
}
void benchmarkResultsFromDynamic(
const dynamic& d,
vector<detail::BenchmarkResult>& results) {
for (auto& datum : d) {
results.push_back(
{datum[0].asString(), datum[1].asString(), datum[2].asDouble()});
}
}
static pair<StringPiece, StringPiece> resultKey(
const detail::BenchmarkResult& result) {
return pair<StringPiece, StringPiece>(result.file, result.name);
}
void printResultComparison(
const vector<detail::BenchmarkResult>& base,
const vector<detail::BenchmarkResult>& test) {
map<pair<StringPiece, StringPiece>, double> baselines;
for (auto& baseResult : base) {
baselines[resultKey(baseResult)] = baseResult.timeInNs;
}
//
// Width available
static const unsigned int columns = 76;
// Compute the longest benchmark name
size_t longestName = 0;
for (auto& datum : test) {
longestName = max(longestName, datum.name.size());
}
// Print a horizontal rule
auto separator = [&](char pad) { puts(string(columns, pad).c_str()); };
// Print header for a file
auto header = [&](const string& file) {
separator('=');
printf("%-*srelative time/iter iters/s\n", columns - 28, file.c_str());
separator('=');
};
string lastFile;
for (auto& datum : test) {
folly::Optional<double> baseline =
folly::get_optional(baselines, resultKey(datum));
auto file = datum.file;
if (file != lastFile) {
// New file starting
header(file);
lastFile = file;
}
string s = datum.name;
if (s == "-") {
separator('-');
continue;
}
if (s[0] == '%') {
s.erase(0, 1);
}
s.resize(columns - 29, ' ');
auto nsPerIter = datum.timeInNs;
auto secPerIter = nsPerIter / 1E9;
auto itersPerSec = (secPerIter == 0)
? std::numeric_limits<double>::infinity()
: (1 / secPerIter);
if (!baseline) {
// Print without baseline
printf(
"%*s %9s %7s\n",
static_cast<int>(s.size()),
s.c_str(),
readableTime(secPerIter, 2).c_str(),
metricReadable(itersPerSec, 2).c_str());
} else {
// Print with baseline
auto rel = *baseline / nsPerIter * 100.0;
printf(
"%*s %7.2f%% %9s %7s\n",
static_cast<int>(s.size()),
s.c_str(),
rel,
readableTime(secPerIter, 2).c_str(),
metricReadable(itersPerSec, 2).c_str());
}
}
separator('=');
}
void runBenchmarks() {
CHECK(!benchmarks().empty());
vector<detail::BenchmarkResult> results;
results.reserve(benchmarks().size() - 1);
std::unique_ptr<boost::regex> bmRegex;
if (!FLAGS_bm_regex.empty()) {
bmRegex = std::make_unique<boost::regex>(FLAGS_bm_regex);
}
// PLEASE KEEP QUIET. MEASUREMENTS IN PROGRESS.
size_t baselineIndex = getGlobalBenchmarkBaselineIndex();
auto const globalBaseline =
runBenchmarkGetNSPerIteration(benchmarks()[baselineIndex].func, 0);
auto printer = BenchmarkResultsPrinter{};
FOR_EACH_RANGE (i, 0, benchmarks().size()) {
if (i == baselineIndex) {
continue;
}
double elapsed = 0.0;
auto& bm = benchmarks()[i];
if (bm.name != "-") { // skip separators
if (bmRegex && !boost::regex_search(bm.name, *bmRegex)) {
continue;
}
elapsed = runBenchmarkGetNSPerIteration(bm.func, globalBaseline);
}
if (!FLAGS_json_verbose && !FLAGS_json) {
printer.print({{bm.file, bm.name, elapsed}});
} else {
results.push_back({bm.file, bm.name, elapsed});
}
}
// PLEASE MAKE NOISE. MEASUREMENTS DONE.
if (FLAGS_json_verbose || FLAGS_json) {
printBenchmarkResults(results);
} else {
printer.separator('=');
}
}
} // namespace folly


@@ -1,579 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <folly/Portability.h>
#include <folly/Preprocessor.h> // for FB_ANONYMOUS_VARIABLE
#include <folly/ScopeGuard.h>
#include <folly/Traits.h>
#include <folly/functional/Invoke.h>
#include <folly/portability/GFlags.h>
#include <cassert>
#include <chrono>
#include <functional>
#include <limits>
#include <type_traits>
#include <boost/function_types/function_arity.hpp>
#include <glog/logging.h>
DECLARE_bool(benchmark);
namespace folly {
/**
* Runs all benchmarks defined. Usually put in main().
*/
void runBenchmarks();
/**
* Runs all benchmarks defined if and only if the --benchmark flag has
* been passed to the program. Usually put in main().
*/
inline bool runBenchmarksOnFlag() {
if (FLAGS_benchmark) {
runBenchmarks();
}
return FLAGS_benchmark;
}
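/**
 * Typical usage sketch (illustrative; assumes the program parses its
 * gflags-defined flags before running benchmarks):
 *
 *   int main(int argc, char** argv) {
 *     gflags::ParseCommandLineFlags(&argc, &argv, true);
 *     folly::runBenchmarksOnFlag();  // runs only if --benchmark was passed
 *     return 0;
 *   }
 */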
namespace detail {
using TimeIterPair =
std::pair<std::chrono::high_resolution_clock::duration, unsigned int>;
using BenchmarkFun = std::function<detail::TimeIterPair(unsigned int)>;
struct BenchmarkRegistration {
std::string file;
std::string name;
BenchmarkFun func;
};
struct BenchmarkResult {
std::string file;
std::string name;
double timeInNs;
};
/**
* Adds a benchmark wrapped in a std::function. Only used
* internally. Pass by value is intentional.
*/
void addBenchmarkImpl(
const char* file,
const char* name,
std::function<TimeIterPair(unsigned int)>);
} // namespace detail
/**
* Supporting type for BENCHMARK_SUSPEND defined below.
*/
struct BenchmarkSuspender {
using Clock = std::chrono::high_resolution_clock;
using TimePoint = Clock::time_point;
using Duration = Clock::duration;
BenchmarkSuspender() {
start = Clock::now();
}
BenchmarkSuspender(const BenchmarkSuspender&) = delete;
BenchmarkSuspender(BenchmarkSuspender&& rhs) noexcept {
start = rhs.start;
rhs.start = {};
}
BenchmarkSuspender& operator=(const BenchmarkSuspender&) = delete;
BenchmarkSuspender& operator=(BenchmarkSuspender&& rhs) {
if (start != TimePoint{}) {
tally();
}
start = rhs.start;
rhs.start = {};
return *this;
}
~BenchmarkSuspender() {
if (start != TimePoint{}) {
tally();
}
}
void dismiss() {
assert(start != TimePoint{});
tally();
start = {};
}
void rehire() {
assert(start == TimePoint{});
start = Clock::now();
}
template <class F>
auto dismissing(F f) -> invoke_result_t<F> {
SCOPE_EXIT {
rehire();
};
dismiss();
return f();
}
/**
   * This is for use inside of if-conditions, used in BENCHMARK macros.
   * The contextual conversion performed by an if-condition bypasses the
   * explicit on operator bool.
*/
explicit operator bool() const {
return false;
}
/**
* Accumulates time spent outside benchmark.
*/
static Duration timeSpent;
private:
void tally() {
auto end = Clock::now();
timeSpent += end - start;
start = end;
}
TimePoint start;
};
/**
* Adds a benchmark. Usually not called directly but instead through
* the macro BENCHMARK defined below. The lambda function involved
* must take exactly one parameter of type unsigned, and the benchmark
* uses it with counter semantics (iteration occurs inside the
* function).
*/
template <typename Lambda>
typename std::enable_if<
boost::function_types::function_arity<
decltype(&Lambda::operator())>::value == 2>::type
addBenchmark(const char* file, const char* name, Lambda&& lambda) {
auto execute = [=](unsigned int times) {
BenchmarkSuspender::timeSpent = {};
unsigned int niter;
// CORE MEASUREMENT STARTS
auto start = std::chrono::high_resolution_clock::now();
niter = lambda(times);
auto end = std::chrono::high_resolution_clock::now();
// CORE MEASUREMENT ENDS
return detail::TimeIterPair(
(end - start) - BenchmarkSuspender::timeSpent, niter);
};
detail::addBenchmarkImpl(
file, name, std::function<detail::TimeIterPair(unsigned int)>(execute));
}
/**
* Adds a benchmark. Usually not called directly but instead through
* the macro BENCHMARK defined below. The lambda function involved
* must take zero parameters, and the benchmark calls it repeatedly
* (iteration occurs outside the function).
*/
template <typename Lambda>
typename std::enable_if<
boost::function_types::function_arity<
decltype(&Lambda::operator())>::value == 1>::type
addBenchmark(const char* file, const char* name, Lambda&& lambda) {
addBenchmark(file, name, [=](unsigned int times) {
unsigned int niter = 0;
while (times-- > 0) {
niter += lambda();
}
return niter;
});
}
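/**
 * Illustrative sketch of direct registration (normally the BENCHMARK
 * macro wraps this instead; the benchmark name here is hypothetical).
 * The zero-parameter lambda returns the number of iterations it
 * performed, matching the overload above:
 *
 *   folly::addBenchmark(__FILE__, "insertOne", [] {
 *     std::vector<int> v;
 *     v.push_back(42);
 *     folly::doNotOptimizeAway(v.data());
 *     return 1u;  // one iteration performed
 *   });
 */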
/**
* Call doNotOptimizeAway(var) to ensure that var will be computed even
* post-optimization. Use it for variables that are computed during
* benchmarking but otherwise are useless. The compiler tends to do a
* good job at eliminating unused variables, and this function fools it
* into thinking var is in fact needed.
*
* Call makeUnpredictable(var) when you don't want the optimizer to use
* its knowledge of var to shape the following code. This is useful
* when constant propagation or power reduction is possible during your
* benchmark but not in real use cases.
*/
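//
// Illustrative sketch (slowFunction is a hypothetical function being
// benchmarked):
//
//   int x = 42;
//   makeUnpredictable(x);    // keeps the compiler from constant-folding x
//   auto r = slowFunction(x);
//   doNotOptimizeAway(r);    // r counts as "used", so the call isn't elided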
#ifdef _MSC_VER
#pragma optimize("", off)
inline void doNotOptimizeDependencySink(const void*) {}
#pragma optimize("", on)
template <class T>
void doNotOptimizeAway(const T& datum) {
doNotOptimizeDependencySink(&datum);
}
template <typename T>
void makeUnpredictable(T& datum) {
doNotOptimizeDependencySink(&datum);
}
#else
namespace detail {
template <typename T>
struct DoNotOptimizeAwayNeedsIndirect {
using Decayed = typename std::decay<T>::type;
// First two constraints ensure it can be an "r" operand.
// std::is_pointer check is because callers seem to expect that
// doNotOptimizeAway(&x) is equivalent to doNotOptimizeAway(x).
constexpr static bool value = !folly::is_trivially_copyable<Decayed>::value ||
sizeof(Decayed) > sizeof(long) || std::is_pointer<Decayed>::value;
};
} // namespace detail
template <typename T>
auto doNotOptimizeAway(const T& datum) -> typename std::enable_if<
!detail::DoNotOptimizeAwayNeedsIndirect<T>::value>::type {
// The "r" constraint forces the compiler to make datum available
// in a register to the asm block, which means that it must have
// computed/loaded it. We use this path for things that are <=
// sizeof(long) (they have to fit), trivial (otherwise the compiler
// doesn't want to put them in a register), and not a pointer (because
// doNotOptimizeAway(&foo) would otherwise be a foot gun that didn't
// necessarily compute foo).
//
// An earlier version of this method had a more permissive input operand
// constraint, but that caused unnecessary variation between clang and
// gcc benchmarks.
asm volatile("" ::"r"(datum));
}
template <typename T>
auto doNotOptimizeAway(const T& datum) -> typename std::enable_if<
detail::DoNotOptimizeAwayNeedsIndirect<T>::value>::type {
// This version of doNotOptimizeAway tells the compiler that the asm
// block will read datum from memory, and that in addition it might read
// or write from any memory location. If the memory clobber could be
// separated into input and output that would be preferable.
asm volatile("" ::"m"(datum) : "memory");
}
template <typename T>
auto makeUnpredictable(T& datum) -> typename std::enable_if<
!detail::DoNotOptimizeAwayNeedsIndirect<T>::value>::type {
asm volatile("" : "+r"(datum));
}
template <typename T>
auto makeUnpredictable(T& datum) -> typename std::enable_if<
detail::DoNotOptimizeAwayNeedsIndirect<T>::value>::type {
asm volatile("" ::"m"(datum) : "memory");
}
#endif
struct dynamic;
void benchmarkResultsToDynamic(
const std::vector<detail::BenchmarkResult>& data,
dynamic&);
void benchmarkResultsFromDynamic(
const dynamic&,
std::vector<detail::BenchmarkResult>&);
void printResultComparison(
const std::vector<detail::BenchmarkResult>& base,
const std::vector<detail::BenchmarkResult>& test);
} // namespace folly
/**
* Introduces a benchmark function. Used internally, see BENCHMARK and
* friends below.
*/
#define BENCHMARK_IMPL(funName, stringName, rv, paramType, paramName) \
static void funName(paramType); \
static bool FB_ANONYMOUS_VARIABLE(follyBenchmarkUnused) = \
(::folly::addBenchmark( \
__FILE__, \
stringName, \
[](paramType paramName) -> unsigned { \
funName(paramName); \
return rv; \
}), \
true); \
static void funName(paramType paramName)
/**
* Introduces a benchmark function with support for returning the actual
* number of iterations. Used internally, see BENCHMARK_MULTI and friends
* below.
*/
#define BENCHMARK_MULTI_IMPL(funName, stringName, paramType, paramName) \
static unsigned funName(paramType); \
static bool FB_ANONYMOUS_VARIABLE(follyBenchmarkUnused) = \
(::folly::addBenchmark( \
__FILE__, \
stringName, \
[](paramType paramName) { return funName(paramName); }), \
true); \
static unsigned funName(paramType paramName)
/**
* Introduces a benchmark function. Use with either one or two arguments.
* The first is the name of the benchmark. Use something descriptive, such
* as insertVectorBegin. The second argument may be missing, or could be a
 * symbolic counter. The counter dictates how many internal iterations the
* benchmark does. Example:
*
* BENCHMARK(vectorPushBack) {
* vector<int> v;
* v.push_back(42);
* }
*
* BENCHMARK(insertVectorBegin, n) {
* vector<int> v;
* FOR_EACH_RANGE (i, 0, n) {
* v.insert(v.begin(), 42);
* }
* }
*/
#define BENCHMARK(name, ...) \
BENCHMARK_IMPL( \
name, \
FB_STRINGIZE(name), \
FB_ARG_2_OR_1(1, ##__VA_ARGS__), \
FB_ONE_OR_NONE(unsigned, ##__VA_ARGS__), \
__VA_ARGS__)
/**
* Like BENCHMARK above, but allows the user to return the actual
* number of iterations executed in the function body. This can be
* useful if the benchmark function doesn't know upfront how many
* iterations it's going to run or if it runs through a certain
* number of test cases, e.g.:
*
* BENCHMARK_MULTI(benchmarkSomething) {
* std::vector<int> testCases { 0, 1, 1, 2, 3, 5 };
* for (int c : testCases) {
* doSomething(c);
* }
* return testCases.size();
* }
*/
#define BENCHMARK_MULTI(name, ...) \
BENCHMARK_MULTI_IMPL( \
name, \
FB_STRINGIZE(name), \
FB_ONE_OR_NONE(unsigned, ##__VA_ARGS__), \
__VA_ARGS__)
/**
* Defines a benchmark that passes a parameter to another one. This is
* common for benchmarks that need a "problem size" in addition to
* "number of iterations". Consider:
*
* void pushBack(uint32_t n, size_t initialSize) {
* vector<int> v;
* BENCHMARK_SUSPEND {
* v.resize(initialSize);
* }
* FOR_EACH_RANGE (i, 0, n) {
* v.push_back(i);
* }
* }
* BENCHMARK_PARAM(pushBack, 0)
* BENCHMARK_PARAM(pushBack, 1000)
* BENCHMARK_PARAM(pushBack, 1000000)
*
* The benchmark above estimates the speed of push_back at different
* initial sizes of the vector. The framework will pass 0, 1000, and
* 1000000 for initialSize, and the iteration count for n.
*/
#define BENCHMARK_PARAM(name, param) BENCHMARK_NAMED_PARAM(name, param, param)
/**
* Same as BENCHMARK_PARAM, but allows one to return the actual number of
* iterations that have been run.
*/
#define BENCHMARK_PARAM_MULTI(name, param) \
BENCHMARK_NAMED_PARAM_MULTI(name, param, param)
/*
* Like BENCHMARK_PARAM(), but allows a custom name to be specified for each
* parameter, rather than using the parameter value.
*
* Useful when the parameter value is not a valid token for string pasting,
 * or when you want to specify multiple parameter arguments.
*
* For example:
*
* void addValue(uint32_t n, int64_t bucketSize, int64_t min, int64_t max) {
* Histogram<int64_t> hist(bucketSize, min, max);
* int64_t num = min;
* FOR_EACH_RANGE (i, 0, n) {
* hist.addValue(num);
* ++num;
* if (num > max) { num = min; }
* }
* }
*
* BENCHMARK_NAMED_PARAM(addValue, 0_to_100, 1, 0, 100)
* BENCHMARK_NAMED_PARAM(addValue, 0_to_1000, 10, 0, 1000)
* BENCHMARK_NAMED_PARAM(addValue, 5k_to_20k, 250, 5000, 20000)
*/
#define BENCHMARK_NAMED_PARAM(name, param_name, ...) \
BENCHMARK_IMPL( \
FB_CONCATENATE(name, FB_CONCATENATE(_, param_name)), \
FB_STRINGIZE(name) "(" FB_STRINGIZE(param_name) ")", \
iters, \
unsigned, \
iters) { \
name(iters, ##__VA_ARGS__); \
}
/**
* Same as BENCHMARK_NAMED_PARAM, but allows one to return the actual number
* of iterations that have been run.
*/
#define BENCHMARK_NAMED_PARAM_MULTI(name, param_name, ...) \
BENCHMARK_MULTI_IMPL( \
FB_CONCATENATE(name, FB_CONCATENATE(_, param_name)), \
FB_STRINGIZE(name) "(" FB_STRINGIZE(param_name) ")", \
unsigned, \
iters) { \
return name(iters, ##__VA_ARGS__); \
}
/**
* Just like BENCHMARK, but prints the time relative to a
* baseline. The baseline is the most recent BENCHMARK() seen in
* the current scope. Example:
*
* // This is the baseline
* BENCHMARK(insertVectorBegin, n) {
* vector<int> v;
* FOR_EACH_RANGE (i, 0, n) {
* v.insert(v.begin(), 42);
* }
* }
*
* BENCHMARK_RELATIVE(insertListBegin, n) {
* list<int> s;
* FOR_EACH_RANGE (i, 0, n) {
* s.insert(s.begin(), 42);
* }
* }
*
 * Any number of relative benchmarks can be associated with a
* baseline. Another BENCHMARK() occurrence effectively establishes a
* new baseline.
*/
#define BENCHMARK_RELATIVE(name, ...) \
BENCHMARK_IMPL( \
name, \
"%" FB_STRINGIZE(name), \
FB_ARG_2_OR_1(1, ##__VA_ARGS__), \
FB_ONE_OR_NONE(unsigned, ##__VA_ARGS__), \
__VA_ARGS__)
/**
* Same as BENCHMARK_RELATIVE, but allows one to return the actual number
* of iterations that have been run.
*/
#define BENCHMARK_RELATIVE_MULTI(name, ...) \
BENCHMARK_MULTI_IMPL( \
name, \
"%" FB_STRINGIZE(name), \
FB_ONE_OR_NONE(unsigned, ##__VA_ARGS__), \
__VA_ARGS__)
/**
* A combination of BENCHMARK_RELATIVE and BENCHMARK_PARAM.
*/
#define BENCHMARK_RELATIVE_PARAM(name, param) \
BENCHMARK_RELATIVE_NAMED_PARAM(name, param, param)
/**
* Same as BENCHMARK_RELATIVE_PARAM, but allows one to return the actual
* number of iterations that have been run.
*/
#define BENCHMARK_RELATIVE_PARAM_MULTI(name, param) \
BENCHMARK_RELATIVE_NAMED_PARAM_MULTI(name, param, param)
/**
* A combination of BENCHMARK_RELATIVE and BENCHMARK_NAMED_PARAM.
*/
#define BENCHMARK_RELATIVE_NAMED_PARAM(name, param_name, ...) \
BENCHMARK_IMPL( \
FB_CONCATENATE(name, FB_CONCATENATE(_, param_name)), \
"%" FB_STRINGIZE(name) "(" FB_STRINGIZE(param_name) ")", \
iters, \
unsigned, \
iters) { \
name(iters, ##__VA_ARGS__); \
}
/**
* Same as BENCHMARK_RELATIVE_NAMED_PARAM, but allows one to return the
* actual number of iterations that have been run.
*/
#define BENCHMARK_RELATIVE_NAMED_PARAM_MULTI(name, param_name, ...) \
BENCHMARK_MULTI_IMPL( \
FB_CONCATENATE(name, FB_CONCATENATE(_, param_name)), \
"%" FB_STRINGIZE(name) "(" FB_STRINGIZE(param_name) ")", \
unsigned, \
iters) { \
return name(iters, ##__VA_ARGS__); \
}
/**
* Draws a line of dashes.
*/
#define BENCHMARK_DRAW_LINE() \
static bool FB_ANONYMOUS_VARIABLE(follyBenchmarkUnused) = \
(::folly::addBenchmark(__FILE__, "-", []() -> unsigned { return 0; }), \
true)
/**
 * Allows execution of code that doesn't count toward the benchmark's
* time budget. Example:
*
 * BENCHMARK(insertVectorBegin, n) {
* vector<int> v;
* BENCHMARK_SUSPEND {
* v.reserve(n);
* }
* FOR_EACH_RANGE (i, 0, n) {
* v.insert(v.begin(), 42);
* }
* }
*/
#define BENCHMARK_SUSPEND \
if (auto FB_ANONYMOUS_VARIABLE(BENCHMARK_SUSPEND) = \
::folly::BenchmarkSuspender()) { \
} else
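// Putting it together (an illustrative sketch; folly::runBenchmarks() and
// gflags initialization are assumed, as they are declared outside this
// excerpt):
//
//   BENCHMARK(insertVectorBegin, n) {
//     std::vector<int> v;
//     BENCHMARK_SUSPEND { v.reserve(n); }   // setup excluded from timing
//     FOR_EACH_RANGE (i, 0, n) { v.insert(v.begin(), 42); }
//   }
//
//   int main(int argc, char** argv) {
//     gflags::ParseCommandLineFlags(&argc, &argv, true);
//     folly::runBenchmarks();
//     return 0;
//   }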


@@ -1,17 +0,0 @@
/*
* Copyright 2011-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/lang/Bits.h> // @shim


@@ -1,15 +0,0 @@
add_library(
follybenchmark
Benchmark.cpp
)
target_link_libraries(follybenchmark PUBLIC folly)
apply_folly_compile_options_to_target(follybenchmark)
install(
TARGETS follybenchmark
EXPORT folly
RUNTIME DESTINATION ${BIN_INSTALL_DIR}
LIBRARY DESTINATION ${LIB_INSTALL_DIR}
ARCHIVE DESTINATION ${LIB_INSTALL_DIR}
)
add_subdirectory(experimental/exception_tracer)


@@ -1,194 +0,0 @@
/*
* Copyright 2013-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
/* These definitions are in a separate file so that they
* may be included from C- as well as C++-based projects. */
#include <folly/portability/Config.h>
/**
* Portable version check.
*/
#ifndef __GNUC_PREREQ
#if defined __GNUC__ && defined __GNUC_MINOR__
/* nolint */
#define __GNUC_PREREQ(maj, min) \
((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
#else
/* nolint */
#define __GNUC_PREREQ(maj, min) 0
#endif
#endif
// portable version check for clang
#ifndef __CLANG_PREREQ
#if defined __clang__ && defined __clang_major__ && defined __clang_minor__
/* nolint */
#define __CLANG_PREREQ(maj, min) \
((__clang_major__ << 16) + __clang_minor__ >= ((maj) << 16) + (min))
#else
/* nolint */
#define __CLANG_PREREQ(maj, min) 0
#endif
#endif
#if defined(__has_builtin)
#define FOLLY_HAS_BUILTIN(...) __has_builtin(__VA_ARGS__)
#else
#define FOLLY_HAS_BUILTIN(...) 0
#endif
#if defined(__has_feature)
#define FOLLY_HAS_FEATURE(...) __has_feature(__VA_ARGS__)
#else
#define FOLLY_HAS_FEATURE(...) 0
#endif
/* FOLLY_SANITIZE_ADDRESS is defined to 1 if the current compilation unit
* is being compiled with ASAN enabled.
*
* Beware when using this macro in a header file: this macro may change values
* across compilation units if some libraries are built with ASAN enabled
* and some built with ASAN disabled. For instance, this may occur, if folly
* itself was compiled without ASAN but a downstream project that uses folly is
* compiling with ASAN enabled.
*
* Use FOLLY_ASAN_ENABLED (defined in folly-config.h) to check if folly itself
* was compiled with ASAN enabled.
*/
#if FOLLY_HAS_FEATURE(address_sanitizer) || __SANITIZE_ADDRESS__
#define FOLLY_SANITIZE_ADDRESS 1
#endif
/* Define attribute wrapper for function attribute used to disable
* address sanitizer instrumentation. Unfortunately, this attribute
* has issues when inlining is used, so disable that as well. */
#ifdef FOLLY_SANITIZE_ADDRESS
#if defined(__clang__)
#if __has_attribute(__no_sanitize__)
#define FOLLY_DISABLE_ADDRESS_SANITIZER \
__attribute__((__no_sanitize__("address"), __noinline__))
#elif __has_attribute(__no_address_safety_analysis__)
#define FOLLY_DISABLE_ADDRESS_SANITIZER \
__attribute__((__no_address_safety_analysis__, __noinline__))
#elif __has_attribute(__no_sanitize_address__)
#define FOLLY_DISABLE_ADDRESS_SANITIZER \
__attribute__((__no_sanitize_address__, __noinline__))
#endif
#elif defined(__GNUC__)
#define FOLLY_DISABLE_ADDRESS_SANITIZER \
__attribute__((__no_address_safety_analysis__, __noinline__))
#endif
#endif
#ifndef FOLLY_DISABLE_ADDRESS_SANITIZER
#define FOLLY_DISABLE_ADDRESS_SANITIZER
#endif
/* Define a convenience macro to test when thread sanitizer is being used
* across the different compilers (e.g. clang, gcc) */
#if FOLLY_HAS_FEATURE(thread_sanitizer) || __SANITIZE_THREAD__
#define FOLLY_SANITIZE_THREAD 1
#endif
/**
* Define a convenience macro to test when ASAN, UBSAN or TSAN sanitizer are
* being used
*/
#if defined(FOLLY_SANITIZE_ADDRESS) || defined(FOLLY_SANITIZE_THREAD)
#define FOLLY_SANITIZE 1
#endif
#if FOLLY_SANITIZE
#define FOLLY_DISABLE_UNDEFINED_BEHAVIOR_SANITIZER(...) \
__attribute__((no_sanitize(__VA_ARGS__)))
#else
#define FOLLY_DISABLE_UNDEFINED_BEHAVIOR_SANITIZER(...)
#endif // FOLLY_SANITIZE
/**
* Macro for marking functions as having public visibility.
*/
#if defined(__GNUC__)
#if __GNUC_PREREQ(4, 9)
#define FOLLY_EXPORT [[gnu::visibility("default")]]
#else
#define FOLLY_EXPORT __attribute__((__visibility__("default")))
#endif
#else
#define FOLLY_EXPORT
#endif
// noinline
#ifdef _MSC_VER
#define FOLLY_NOINLINE __declspec(noinline)
#elif defined(__clang__) || defined(__GNUC__)
#define FOLLY_NOINLINE __attribute__((__noinline__))
#else
#define FOLLY_NOINLINE
#endif
// always inline
#ifdef _MSC_VER
#define FOLLY_ALWAYS_INLINE __forceinline
#elif defined(__clang__) || defined(__GNUC__)
#define FOLLY_ALWAYS_INLINE inline __attribute__((__always_inline__))
#else
#define FOLLY_ALWAYS_INLINE inline
#endif
// attribute hidden
#if _MSC_VER
#define FOLLY_ATTR_VISIBILITY_HIDDEN
#elif defined(__clang__) || defined(__GNUC__)
#define FOLLY_ATTR_VISIBILITY_HIDDEN __attribute__((__visibility__("hidden")))
#else
#define FOLLY_ATTR_VISIBILITY_HIDDEN
#endif
// An attribute for marking symbols as weak, if supported
#if FOLLY_HAVE_WEAK_SYMBOLS
#define FOLLY_ATTR_WEAK __attribute__((__weak__))
#else
#define FOLLY_ATTR_WEAK
#endif
// Microsoft ABI version (can be overridden manually if necessary)
#ifndef FOLLY_MICROSOFT_ABI_VER
#ifdef _MSC_VER
#define FOLLY_MICROSOFT_ABI_VER _MSC_VER
#endif
#endif
// These functions are defined by the TSAN runtime library and enable
// annotating mutexes for TSAN.
extern "C" FOLLY_ATTR_WEAK void
AnnotateRWLockCreate(const char* f, int l, const volatile void* addr);
extern "C" FOLLY_ATTR_WEAK void
AnnotateRWLockCreateStatic(const char* f, int l, const volatile void* addr);
extern "C" FOLLY_ATTR_WEAK void
AnnotateRWLockDestroy(const char* f, int l, const volatile void* addr);
extern "C" FOLLY_ATTR_WEAK void
AnnotateRWLockAcquired(const char* f, int l, const volatile void* addr, long w);
extern "C" FOLLY_ATTR_WEAK void
AnnotateRWLockReleased(const char* f, int l, const volatile void* addr, long w);
extern "C" FOLLY_ATTR_WEAK void AnnotateBenignRaceSized(
const char* f,
int l,
const volatile void* addr,
long size,
const char* desc);


@@ -1,77 +0,0 @@
/*
* Copyright 2016-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <cstddef>
#include <utility>
#include <folly/lang/Align.h>
namespace folly {
/**
* Holds a type T, in addition to enough padding to ensure that it isn't subject
* to false sharing within the range used by folly.
*
* If `sizeof(T) <= alignof(T)` then the inner `T` will be entirely within one
* false sharing range (AKA cache line).
*/
template <typename T>
class CachelinePadded {
static_assert(
alignof(T) <= max_align_v,
"CachelinePadded does not support over-aligned types.");
public:
template <typename... Args>
explicit CachelinePadded(Args&&... args)
: inner_(std::forward<Args>(args)...) {}
T* get() {
return &inner_;
}
const T* get() const {
return &inner_;
}
T* operator->() {
return get();
}
const T* operator->() const {
return get();
}
T& operator*() {
return *get();
}
const T& operator*() const {
return *get();
}
private:
static constexpr size_t paddingSize() noexcept {
return hardware_destructive_interference_size -
(alignof(T) % hardware_destructive_interference_size);
}
char paddingPre_[paddingSize()];
T inner_;
char paddingPost_[paddingSize()];
};
} // namespace folly
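// Illustrative usage (a sketch, assuming two threads update separate
// counters): the padding keeps each atomic on its own false-sharing range,
// so the writes don't ping-pong a shared cache line:
//
//   folly::CachelinePadded<std::atomic<uint64_t>> counterA;
//   folly::CachelinePadded<std::atomic<uint64_t>> counterB;
//   counterA->fetch_add(1, std::memory_order_relaxed);  // thread A
//   counterB->fetch_add(1, std::memory_order_relaxed);  // thread B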


@@ -1,190 +0,0 @@
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <chrono>
#include <stdexcept>
#include <type_traits>
#include <folly/Portability.h>
#include <folly/lang/Exception.h>
#include <folly/portability/Time.h>
/***
* include or backport:
* * std::chrono::ceil
* * std::chrono::floor
* * std::chrono::round
*/
#if __cpp_lib_chrono >= 201510 || _MSC_VER
namespace folly {
namespace chrono {
/* using override */ using std::chrono::ceil;
/* using override */ using std::chrono::floor;
/* using override */ using std::chrono::round;
} // namespace chrono
} // namespace folly
#else
namespace folly {
namespace chrono {
namespace detail {
// from: http://en.cppreference.com/w/cpp/chrono/duration/ceil, CC-BY-SA
template <typename T>
struct is_duration : std::false_type {};
template <typename Rep, typename Period>
struct is_duration<std::chrono::duration<Rep, Period>> : std::true_type {};
template <typename To, typename Duration>
constexpr To ceil_impl(Duration const& d, To const& t) {
return t < d ? t + To{1} : t;
}
template <typename To, typename Duration>
constexpr To floor_impl(Duration const& d, To const& t) {
return t > d ? t - To{1} : t;
}
template <typename To, typename Diff>
constexpr To round_impl(To const& t0, To const& t1, Diff diff0, Diff diff1) {
return diff0 < diff1 ? t0 : diff1 < diff0 ? t1 : t0.count() & 1 ? t1 : t0;
}
template <typename To, typename Duration>
constexpr To round_impl(Duration const& d, To const& t0, To const& t1) {
return round_impl(t0, t1, d - t0, t1 - d);
}
template <typename To, typename Duration>
constexpr To round_impl(Duration const& d, To const& t0) {
return round_impl(d, t0, t0 + To{1});
}
} // namespace detail
// mimic: std::chrono::ceil, C++17
// from: http://en.cppreference.com/w/cpp/chrono/duration/ceil, CC-BY-SA
template <
typename To,
typename Rep,
typename Period,
typename = typename std::enable_if<detail::is_duration<To>::value>::type>
constexpr To ceil(std::chrono::duration<Rep, Period> const& d) {
return detail::ceil_impl(d, std::chrono::duration_cast<To>(d));
}
// mimic: std::chrono::ceil, C++17
// from: http://en.cppreference.com/w/cpp/chrono/time_point/ceil, CC-BY-SA
template <
typename To,
typename Clock,
typename Duration,
typename = typename std::enable_if<detail::is_duration<To>::value>::type>
constexpr std::chrono::time_point<Clock, To> ceil(
std::chrono::time_point<Clock, Duration> const& tp) {
return std::chrono::time_point<Clock, To>{ceil<To>(tp.time_since_epoch())};
}
// mimic: std::chrono::floor, C++17
// from: http://en.cppreference.com/w/cpp/chrono/duration/floor, CC-BY-SA
template <
typename To,
typename Rep,
typename Period,
typename = typename std::enable_if<detail::is_duration<To>::value>::type>
constexpr To floor(std::chrono::duration<Rep, Period> const& d) {
return detail::floor_impl(d, std::chrono::duration_cast<To>(d));
}
// mimic: std::chrono::floor, C++17
// from: http://en.cppreference.com/w/cpp/chrono/time_point/floor, CC-BY-SA
template <
typename To,
typename Clock,
typename Duration,
typename = typename std::enable_if<detail::is_duration<To>::value>::type>
constexpr std::chrono::time_point<Clock, To> floor(
std::chrono::time_point<Clock, Duration> const& tp) {
return std::chrono::time_point<Clock, To>{floor<To>(tp.time_since_epoch())};
}
// mimic: std::chrono::round, C++17
// from: http://en.cppreference.com/w/cpp/chrono/duration/round, CC-BY-SA
template <
typename To,
typename Rep,
typename Period,
typename = typename std::enable_if<
detail::is_duration<To>::value &&
!std::chrono::treat_as_floating_point<typename To::rep>::value>::type>
constexpr To round(std::chrono::duration<Rep, Period> const& d) {
return detail::round_impl(d, floor<To>(d));
}
// mimic: std::chrono::round, C++17
// from: http://en.cppreference.com/w/cpp/chrono/time_point/round, CC-BY-SA
template <
typename To,
typename Clock,
typename Duration,
typename = typename std::enable_if<
detail::is_duration<To>::value &&
!std::chrono::treat_as_floating_point<typename To::rep>::value>::type>
constexpr std::chrono::time_point<Clock, To> round(
std::chrono::time_point<Clock, Duration> const& tp) {
return std::chrono::time_point<Clock, To>{round<To>(tp.time_since_epoch())};
}
} // namespace chrono
} // namespace folly
#endif
namespace folly {
namespace chrono {
struct coarse_steady_clock {
using rep = std::chrono::milliseconds::rep;
using period = std::chrono::milliseconds::period;
using duration = std::chrono::duration<rep, period>;
using time_point = std::chrono::time_point<coarse_steady_clock, duration>;
constexpr static bool is_steady = true;
static time_point now() {
#ifndef CLOCK_MONOTONIC_COARSE
return time_point(std::chrono::duration_cast<duration>(
std::chrono::steady_clock::now().time_since_epoch()));
#else
timespec ts;
auto ret = clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
if (ret != 0) {
throw_exception<std::runtime_error>(
"Error using CLOCK_MONOTONIC_COARSE.");
}
return time_point(std::chrono::duration_cast<duration>(
std::chrono::seconds(ts.tv_sec) +
std::chrono::nanoseconds(ts.tv_nsec)));
#endif
}
};
} // namespace chrono
} // namespace folly
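// Illustrative usage (sketch): coarse_steady_clock trades precision
// (roughly the kernel tick, typically a few milliseconds) for a cheaper
// time source than steady_clock:
//
//   auto t0 = folly::chrono::coarse_steady_clock::now();
//   doWork();  // hypothetical workload
//   auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
//       folly::chrono::coarse_steady_clock::now() - t0);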


@@ -1,89 +0,0 @@
/*
* Copyright 2016-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/ClockGettimeWrappers.h>
#include <folly/Likely.h>
#include <folly/portability/Time.h>
#include <chrono>
#include <time.h>
#ifndef _WIN32
#define _GNU_SOURCE 1
#include <dlfcn.h>
#endif
namespace folly {
namespace chrono {
static int64_t clock_gettime_ns_fallback(clockid_t clock) {
struct timespec ts;
int r = clock_gettime(clock, &ts);
if (UNLIKELY(r != 0)) {
// Mimic what __clock_gettime_ns does (even though this can be a legit
// value).
return -1;
}
std::chrono::nanoseconds result =
std::chrono::seconds(ts.tv_sec) + std::chrono::nanoseconds(ts.tv_nsec);
return result.count();
}
// Initialize with default behavior, which we might override on Linux hosts
// with VDSO support.
int (*clock_gettime)(clockid_t, timespec* ts) = &::clock_gettime;
int64_t (*clock_gettime_ns)(clockid_t) = &clock_gettime_ns_fallback;
#ifdef FOLLY_HAVE_LINUX_VDSO
namespace {
struct VdsoInitializer {
VdsoInitializer() {
m_handle = dlopen("linux-vdso.so.1", RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
if (!m_handle) {
return;
}
void* p = dlsym(m_handle, "__vdso_clock_gettime");
if (p) {
folly::chrono::clock_gettime = (int (*)(clockid_t, timespec*))p;
}
p = dlsym(m_handle, "__vdso_clock_gettime_ns");
if (p) {
folly::chrono::clock_gettime_ns = (int64_t(*)(clockid_t))p;
}
}
~VdsoInitializer() {
if (m_handle) {
clock_gettime = &::clock_gettime;
clock_gettime_ns = &clock_gettime_ns_fallback;
dlclose(m_handle);
}
}
private:
void* m_handle;
};
static const VdsoInitializer vdso_initializer;
} // namespace
#endif
} // namespace chrono
} // namespace folly


@@ -1,29 +0,0 @@
/*
* Copyright 2016-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <folly/portability/Time.h>
#include <time.h>
namespace folly {
namespace chrono {
extern int (*clock_gettime)(clockid_t, timespec* ts);
extern int64_t (*clock_gettime_ns)(clockid_t);
} // namespace chrono
} // namespace folly


@@ -1,376 +0,0 @@
/*
* Copyright 2011-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// @author: Xin Liu <xliux@fb.com>
#pragma once
#include <algorithm>
#include <atomic>
#include <climits>
#include <cmath>
#include <memory>
#include <mutex>
#include <type_traits>
#include <vector>
#include <boost/noncopyable.hpp>
#include <boost/random.hpp>
#include <boost/type_traits.hpp>
#include <glog/logging.h>
#include <folly/Memory.h>
#include <folly/ThreadLocal.h>
#include <folly/synchronization/MicroSpinLock.h>
namespace folly {
namespace detail {
template <typename ValT, typename NodeT>
class csl_iterator;
template <typename T>
class SkipListNode : private boost::noncopyable {
enum : uint16_t {
IS_HEAD_NODE = 1,
MARKED_FOR_REMOVAL = (1 << 1),
FULLY_LINKED = (1 << 2),
};
public:
typedef T value_type;
template <
typename NodeAlloc,
typename U,
typename =
typename std::enable_if<std::is_convertible<U, T>::value>::type>
static SkipListNode*
create(NodeAlloc& alloc, int height, U&& data, bool isHead = false) {
DCHECK(height >= 1 && height < 64) << height;
size_t size =
sizeof(SkipListNode) + height * sizeof(std::atomic<SkipListNode*>);
auto storage = std::allocator_traits<NodeAlloc>::allocate(alloc, size);
// do placement new
return new (storage)
SkipListNode(uint8_t(height), std::forward<U>(data), isHead);
}
template <typename NodeAlloc>
static void destroy(NodeAlloc& alloc, SkipListNode* node) {
size_t size = sizeof(SkipListNode) +
node->height_ * sizeof(std::atomic<SkipListNode*>);
node->~SkipListNode();
std::allocator_traits<NodeAlloc>::deallocate(alloc, node, size);
}
template <typename NodeAlloc>
struct DestroyIsNoOp : StrictConjunction<
AllocatorHasTrivialDeallocate<NodeAlloc>,
boost::has_trivial_destructor<SkipListNode>> {};
// copy the head node to a new head node assuming lock acquired
SkipListNode* copyHead(SkipListNode* node) {
DCHECK(node != nullptr && height_ > node->height_);
setFlags(node->getFlags());
for (uint8_t i = 0; i < node->height_; ++i) {
setSkip(i, node->skip(i));
}
return this;
}
inline SkipListNode* skip(int layer) const {
DCHECK_LT(layer, height_);
return skip_[layer].load(std::memory_order_consume);
}
// next valid node as in the linked list
SkipListNode* next() {
SkipListNode* node;
for (node = skip(0); (node != nullptr && node->markedForRemoval());
node = node->skip(0)) {
}
return node;
}
void setSkip(uint8_t h, SkipListNode* next) {
DCHECK_LT(h, height_);
skip_[h].store(next, std::memory_order_release);
}
value_type& data() {
return data_;
}
const value_type& data() const {
return data_;
}
int maxLayer() const {
return height_ - 1;
}
int height() const {
return height_;
}
std::unique_lock<MicroSpinLock> acquireGuard() {
return std::unique_lock<MicroSpinLock>(spinLock_);
}
bool fullyLinked() const {
return getFlags() & FULLY_LINKED;
}
bool markedForRemoval() const {
return getFlags() & MARKED_FOR_REMOVAL;
}
bool isHeadNode() const {
return getFlags() & IS_HEAD_NODE;
}
void setIsHeadNode() {
setFlags(uint16_t(getFlags() | IS_HEAD_NODE));
}
void setFullyLinked() {
setFlags(uint16_t(getFlags() | FULLY_LINKED));
}
void setMarkedForRemoval() {
setFlags(uint16_t(getFlags() | MARKED_FOR_REMOVAL));
}
private:
// Note! this can only be called from create() as a placement new.
template <typename U>
SkipListNode(uint8_t height, U&& data, bool isHead)
: height_(height), data_(std::forward<U>(data)) {
spinLock_.init();
setFlags(0);
if (isHead) {
setIsHeadNode();
}
// need to explicitly init the dynamic atomic pointer array
for (uint8_t i = 0; i < height_; ++i) {
new (&skip_[i]) std::atomic<SkipListNode*>(nullptr);
}
}
~SkipListNode() {
for (uint8_t i = 0; i < height_; ++i) {
skip_[i].~atomic();
}
}
uint16_t getFlags() const {
return flags_.load(std::memory_order_consume);
}
void setFlags(uint16_t flags) {
flags_.store(flags, std::memory_order_release);
}
// TODO(xliu): on x86_64, it's possible to squeeze these into
// skip_[0] to maybe save 8 bytes depending on the data alignments.
// NOTE: currently this is x86_64 only anyway, due to the
// MicroSpinLock.
std::atomic<uint16_t> flags_;
const uint8_t height_;
MicroSpinLock spinLock_;
value_type data_;
std::atomic<SkipListNode*> skip_[0];
};
class SkipListRandomHeight {
enum { kMaxHeight = 64 };
public:
// make it a singleton.
static SkipListRandomHeight* instance() {
static SkipListRandomHeight instance_;
return &instance_;
}
int getHeight(int maxHeight) const {
DCHECK_LE(maxHeight, kMaxHeight) << "max height too big!";
double p = randomProb();
for (int i = 0; i < maxHeight; ++i) {
if (p < lookupTable_[i]) {
return i + 1;
}
}
return maxHeight;
}
size_t getSizeLimit(int height) const {
DCHECK_LT(height, kMaxHeight);
return sizeLimitTable_[height];
}
private:
SkipListRandomHeight() {
initLookupTable();
}
void initLookupTable() {
// set skip prob = 1/E
static const double kProbInv = exp(1);
static const double kProb = 1.0 / kProbInv;
static const size_t kMaxSizeLimit = std::numeric_limits<size_t>::max();
double sizeLimit = 1;
double p = lookupTable_[0] = (1 - kProb);
sizeLimitTable_[0] = 1;
for (int i = 1; i < kMaxHeight - 1; ++i) {
p *= kProb;
sizeLimit *= kProbInv;
lookupTable_[i] = lookupTable_[i - 1] + p;
sizeLimitTable_[i] = sizeLimit > kMaxSizeLimit
? kMaxSizeLimit
: static_cast<size_t>(sizeLimit);
}
lookupTable_[kMaxHeight - 1] = 1;
sizeLimitTable_[kMaxHeight - 1] = kMaxSizeLimit;
}
static double randomProb() {
static ThreadLocal<boost::lagged_fibonacci2281> rng_;
return (*rng_)();
}
double lookupTable_[kMaxHeight];
size_t sizeLimitTable_[kMaxHeight];
};
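The class above maps a single uniform random draw to a node height through a cumulative lookup table with per-level continuation probability 1/e. A minimal stdlib-only sketch of the same table construction and lookup (function names are ours; folly's version additionally caches a size-limit table and draws from a thread-local lagged-Fibonacci RNG):

```cpp
#include <cassert>
#include <cmath>

// Cumulative table: lookup[i] = P(height <= i + 1), with per-level
// continuation probability kProb = 1/e, as in initLookupTable() above.
inline void buildLookupTable(double* lookup, int maxHeight) {
  const double kProbInv = std::exp(1.0);
  const double kProb = 1.0 / kProbInv;
  double p = lookup[0] = 1.0 - kProb;
  for (int i = 1; i < maxHeight - 1; ++i) {
    p *= kProb;
    lookup[i] = lookup[i - 1] + p;
  }
  lookup[maxHeight - 1] = 1.0;
}

// Map a uniform sample u in [0, 1) to a height in [1, maxHeight],
// mirroring getHeight(): return the first bucket whose cumulative
// probability exceeds u.
inline int sampleHeight(double u, const double* lookup, int maxHeight) {
  for (int i = 0; i < maxHeight; ++i) {
    if (u < lookup[i]) {
      return i + 1;
    }
  }
  return maxHeight;
}
```

With this distribution roughly 63% of nodes get height 1, 23% height 2, and so on; the expected height is a small constant, which is what keeps the per-node memory overhead low.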
template <typename NodeType, typename NodeAlloc, typename = void>
class NodeRecycler;
template <typename NodeType, typename NodeAlloc>
class NodeRecycler<
NodeType,
NodeAlloc,
typename std::enable_if<
!NodeType::template DestroyIsNoOp<NodeAlloc>::value>::type> {
public:
explicit NodeRecycler(const NodeAlloc& alloc)
: refs_(0), dirty_(false), alloc_(alloc) {
lock_.init();
}
explicit NodeRecycler() : refs_(0), dirty_(false) {
lock_.init();
}
~NodeRecycler() {
CHECK_EQ(refs(), 0);
if (nodes_) {
for (auto& node : *nodes_) {
NodeType::destroy(alloc_, node);
}
}
}
void add(NodeType* node) {
std::lock_guard<MicroSpinLock> g(lock_);
if (nodes_.get() == nullptr) {
nodes_ = std::make_unique<std::vector<NodeType*>>(1, node);
} else {
nodes_->push_back(node);
}
DCHECK_GT(refs(), 0);
dirty_.store(true, std::memory_order_relaxed);
}
int addRef() {
return refs_.fetch_add(1, std::memory_order_relaxed);
}
int releaseRef() {
// We don't expect to clean the recycler immediately every time it is OK
// to do so. Here, it is possible that multiple accessors all release at
// the same time but nobody would clean the recycler here. If this
// happens, the recycler will usually still get cleaned when
// such a race doesn't happen. The worst case is the recycler will
// eventually get deleted along with the skiplist.
if (LIKELY(!dirty_.load(std::memory_order_relaxed) || refs() > 1)) {
return refs_.fetch_add(-1, std::memory_order_relaxed);
}
std::unique_ptr<std::vector<NodeType*>> newNodes;
{
std::lock_guard<MicroSpinLock> g(lock_);
if (nodes_.get() == nullptr || refs() > 1) {
return refs_.fetch_add(-1, std::memory_order_relaxed);
}
// once refs_ reaches 1 and there is no other accessor, it is safe to
// remove all the current nodes in the recycler, as we already acquired
// the lock here so no more new nodes can be added, even though new
// accessors may be added after that.
newNodes.swap(nodes_);
dirty_.store(false, std::memory_order_relaxed);
}
// TODO(xliu) should we spawn a thread to do this when there are large
// number of nodes in the recycler?
for (auto& node : *newNodes) {
NodeType::destroy(alloc_, node);
}
// decrease the ref count at the very end, to minimize the
// chance of other threads acquiring lock_ to clear the deleted
// nodes again.
return refs_.fetch_add(-1, std::memory_order_relaxed);
}
NodeAlloc& alloc() {
return alloc_;
}
private:
int refs() const {
return refs_.load(std::memory_order_relaxed);
}
std::unique_ptr<std::vector<NodeType*>> nodes_;
std::atomic<int32_t> refs_; // current number of visitors to the list
std::atomic<bool> dirty_; // whether *nodes_ is non-empty
MicroSpinLock lock_; // protects access to *nodes_
NodeAlloc alloc_;
};
// In case of arena allocator, no recycling is necessary, and it's possible
// to save on ConcurrentSkipList size.
template <typename NodeType, typename NodeAlloc>
class NodeRecycler<
NodeType,
NodeAlloc,
typename std::enable_if<
NodeType::template DestroyIsNoOp<NodeAlloc>::value>::type> {
public:
explicit NodeRecycler(const NodeAlloc& alloc) : alloc_(alloc) {}
void addRef() {}
void releaseRef() {}
void add(NodeType* /* node */) {}
NodeAlloc& alloc() {
return alloc_;
}
private:
NodeAlloc alloc_;
};
} // namespace detail
} // namespace folly


@ -1,877 +0,0 @@
/*
* Copyright 2011-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// @author: Xin Liu <xliux@fb.com>
//
// A concurrent skip list (CSL) implementation.
// Ref: http://www.cs.tau.ac.il/~shanir/nir-pubs-web/Papers/OPODIS2006-BA.pdf
/*
This implements a sorted associative container that supports only
unique keys. (Similar to std::set.)
Features:
1. Small memory overhead: ~40% less memory overhead compared with
std::set (1.6 words per node versus 3). It has a minimum of 4
words (7 words if nodes got deleted) of per-list overhead,
though.
2. Read accesses (count, find iterator, skipper) are lock-free and
mostly wait-free (the only wait a reader may need to do is when
the node it is visiting is in a pending stage, i.e. deleting,
adding and not fully linked). Write accesses (remove, add) need
to acquire locks, but locks are local to the predecessor nodes
and/or successor nodes.
3. Good high contention performance, comparable single-thread
performance. In the multithreaded case (12 workers), CSL tested
10x faster than a RWSpinLocked std::set for an averaged sized
list (1K - 1M nodes).
Comparable read performance to std::set when single threaded,
especially when the list size is large, and scales better to
larger lists: when the size is small, CSL can be 20-50% slower on
find()/contains(). As the size gets large (> 1M elements),
find()/contains() can be 30% faster.
Iterating through a skiplist is similar to iterating through a
linked list, thus is much (2-6x) faster than on a std::set
(tree-based). This is especially true for short lists due to
better cache locality. Based on that, it's also faster to
intersect two skiplists.
4. Lazy removal with GC support. The removed nodes get deleted when
the last Accessor to the skiplist is destroyed.
Caveats:
1. Write operations are usually 30% slower than std::set in a single
threaded environment.
2. Need to have a head node for each list, which has a 4 word
overhead.
3. When the list is quite small (< 1000 elements), single threaded
benchmarks show CSL can be 10x slower than std::set.
4. The interface requires using an Accessor to access the skiplist.
(See below.)
5. Currently x64 only, due to use of MicroSpinLock.
6. Freed nodes will not be reclaimed as long as there are ongoing
uses of the list.
Sample usage:
typedef ConcurrentSkipList<int> SkipListT;
shared_ptr<SkipListT> sl(SkipListT::createInstance(init_head_height));
{
// It's usually good practice to hold an accessor only during
// its necessary life cycle (but not in a tight loop as
// Accessor creation incurs ref-counting overhead).
//
// Holding it longer delays garbage-collecting the deleted
// nodes in the list.
SkipListT::Accessor accessor(sl);
accessor.insert(23);
accessor.erase(2);
for (auto &elem : accessor) {
// use elem to access data
}
... ...
}
Another useful type is the Skipper accessor. This is useful if you
want to skip to locations in the way std::lower_bound() works,
i.e. it can be used for going through the list by skipping to the
node no less than a specified key. The Skipper keeps its location as
state, which makes it convenient for things like implementing
intersection of two sets efficiently, as it can start from the last
visited position.
{
SkipListT::Accessor accessor(sl);
SkipListT::Skipper skipper(accessor);
skipper.to(30);
if (skipper) {
CHECK_LE(30, *skipper);
}
... ...
// GC may happen when the accessor gets destructed.
}
*/
#pragma once
#include <algorithm>
#include <atomic>
#include <limits>
#include <memory>
#include <type_traits>
#include <boost/iterator/iterator_facade.hpp>
#include <glog/logging.h>
#include <folly/ConcurrentSkipList-inl.h>
#include <folly/Likely.h>
#include <folly/Memory.h>
#include <folly/synchronization/MicroSpinLock.h>
namespace folly {
template <
typename T,
typename Comp = std::less<T>,
// All nodes are allocated using provided SysAllocator,
// it should be thread-safe.
typename NodeAlloc = SysAllocator<void>,
int MAX_HEIGHT = 24>
class ConcurrentSkipList {
// MAX_HEIGHT needs to be at least 2 to suppress compiler
// warnings/errors (Werror=uninitialized triggered due to preds_[1]
// being treated as a scalar in the compiler).
static_assert(
MAX_HEIGHT >= 2 && MAX_HEIGHT < 64,
"MAX_HEIGHT can only be in the range of [2, 64)");
typedef std::unique_lock<folly::MicroSpinLock> ScopedLocker;
typedef ConcurrentSkipList<T, Comp, NodeAlloc, MAX_HEIGHT> SkipListType;
public:
typedef detail::SkipListNode<T> NodeType;
typedef T value_type;
typedef T key_type;
typedef detail::csl_iterator<value_type, NodeType> iterator;
typedef detail::csl_iterator<const value_type, const NodeType> const_iterator;
class Accessor;
class Skipper;
explicit ConcurrentSkipList(int height, const NodeAlloc& alloc)
: recycler_(alloc),
head_(NodeType::create(recycler_.alloc(), height, value_type(), true)),
size_(0) {}
explicit ConcurrentSkipList(int height)
: recycler_(),
head_(NodeType::create(recycler_.alloc(), height, value_type(), true)),
size_(0) {}
// Convenient function to get an Accessor to a new instance.
static Accessor create(int height, const NodeAlloc& alloc) {
return Accessor(createInstance(height, alloc));
}
static Accessor create(int height = 1) {
return Accessor(createInstance(height));
}
// Create a shared_ptr skiplist object with initial head height.
static std::shared_ptr<SkipListType> createInstance(
int height,
const NodeAlloc& alloc) {
return std::make_shared<ConcurrentSkipList>(height, alloc);
}
static std::shared_ptr<SkipListType> createInstance(int height = 1) {
return std::make_shared<ConcurrentSkipList>(height);
}
//===================================================================
// Below are implementation details.
// Please see ConcurrentSkipList::Accessor for stdlib-like APIs.
//===================================================================
~ConcurrentSkipList() {
if /* constexpr */ (NodeType::template DestroyIsNoOp<NodeAlloc>::value) {
// Avoid traversing the list if using arena allocator.
return;
}
for (NodeType* current = head_.load(std::memory_order_relaxed); current;) {
NodeType* tmp = current->skip(0);
NodeType::destroy(recycler_.alloc(), current);
current = tmp;
}
}
private:
static bool greater(const value_type& data, const NodeType* node) {
return node && Comp()(node->data(), data);
}
static bool less(const value_type& data, const NodeType* node) {
return (node == nullptr) || Comp()(data, node->data());
}
static int findInsertionPoint(
NodeType* cur,
int cur_layer,
const value_type& data,
NodeType* preds[],
NodeType* succs[]) {
int foundLayer = -1;
NodeType* pred = cur;
NodeType* foundNode = nullptr;
for (int layer = cur_layer; layer >= 0; --layer) {
NodeType* node = pred->skip(layer);
while (greater(data, node)) {
pred = node;
node = node->skip(layer);
}
if (foundLayer == -1 && !less(data, node)) { // the two keys equal
foundLayer = layer;
foundNode = node;
}
preds[layer] = pred;
// if found, succs[0..foundLayer] need to point to the cached foundNode,
// as foundNode might be deleted at the same time thus pred->skip() can
// return nullptr or another node.
succs[layer] = foundNode ? foundNode : node;
}
return foundLayer;
}
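findInsertionPoint() above is the core search: descend from the top layer, walk right while the next key is smaller, and record the predecessor and successor at every layer. A single-threaded model of the same walk (all names are ours; the real version additionally handles markedForRemoval and caches foundNode so succs stays consistent under concurrent deletes):

```cpp
#include <cassert>
#include <vector>

// A tiny, single-threaded model of a skip-list node: a key plus one
// forward pointer per layer. All concurrency machinery is omitted.
struct DemoNode {
  int key;
  std::vector<DemoNode*> skip; // skip[h] = next node at layer h
};

// Collect preds/succs per layer; return the highest layer at which the
// key was found, or -1 if the key is absent.
inline int demoFindInsertionPoint(DemoNode* head, int topLayer, int key,
                                  std::vector<DemoNode*>& preds,
                                  std::vector<DemoNode*>& succs) {
  int foundLayer = -1;
  DemoNode* pred = head;
  for (int layer = topLayer; layer >= 0; --layer) {
    DemoNode* node = pred->skip[layer];
    while (node != nullptr && node->key < key) { // step right
      pred = node;
      node = node->skip[layer];
    }
    if (foundLayer == -1 && node != nullptr && node->key == key) {
      foundLayer = layer;
    }
    preds[layer] = pred;
    succs[layer] = node;
  }
  return foundLayer;
}

// Build: head -> 3 -> 7 -> 9 at layer 0, head -> 7 at layer 1.
inline DemoNode* buildDemoList() {
  static DemoNode n9{9, {nullptr, nullptr}};
  static DemoNode n7{7, {&n9, nullptr}};
  static DemoNode n3{3, {&n7}};
  static DemoNode head{-1, {&n3, &n7}};
  return &head;
}
```

After the call, preds[k]/succs[k] bracket the insertion point at each layer, which is exactly what lockNodesForChange() later validates under locks.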
size_t size() const {
return size_.load(std::memory_order_relaxed);
}
int height() const {
return head_.load(std::memory_order_consume)->height();
}
int maxLayer() const {
return height() - 1;
}
size_t incrementSize(int delta) {
return size_.fetch_add(delta, std::memory_order_relaxed) + delta;
}
// Returns the node if found, nullptr otherwise.
NodeType* find(const value_type& data) {
auto ret = findNode(data);
if (ret.second && !ret.first->markedForRemoval()) {
return ret.first;
}
return nullptr;
}
// lock all the necessary nodes for changing (adding or removing) the list.
// returns true if all the locks are acquired successfully and the related
// nodes are all valid (not in certain pending states), false otherwise.
bool lockNodesForChange(
int nodeHeight,
ScopedLocker guards[MAX_HEIGHT],
NodeType* preds[MAX_HEIGHT],
NodeType* succs[MAX_HEIGHT],
bool adding = true) {
NodeType *pred, *succ, *prevPred = nullptr;
bool valid = true;
for (int layer = 0; valid && layer < nodeHeight; ++layer) {
pred = preds[layer];
DCHECK(pred != nullptr) << "layer=" << layer << " height=" << height()
<< " nodeheight=" << nodeHeight;
succ = succs[layer];
if (pred != prevPred) {
guards[layer] = pred->acquireGuard();
prevPred = pred;
}
valid = !pred->markedForRemoval() &&
pred->skip(layer) == succ; // check again after locking
if (adding) { // when adding a node, the succ shouldn't be going away
valid = valid && (succ == nullptr || !succ->markedForRemoval());
}
}
return valid;
}
// Returns a pair of values:
// pair.first always stores the pointer to the node with the same input key.
// It could be either the newly added data, or the existing data in the
// list with the same key.
// pair.second stores whether the data is added successfully:
// 0 means not added, otherwise returns the new size.
template <typename U>
std::pair<NodeType*, size_t> addOrGetData(U&& data) {
NodeType *preds[MAX_HEIGHT], *succs[MAX_HEIGHT];
NodeType* newNode;
size_t newSize;
while (true) {
int max_layer = 0;
int layer = findInsertionPointGetMaxLayer(data, preds, succs, &max_layer);
if (layer >= 0) {
NodeType* nodeFound = succs[layer];
DCHECK(nodeFound != nullptr);
if (nodeFound->markedForRemoval()) {
continue; // if it's getting deleted retry finding node.
}
// wait until fully linked.
while (UNLIKELY(!nodeFound->fullyLinked())) {
}
return std::make_pair(nodeFound, 0);
}
// needs to be capped at the original height -- the real height may have grown
int nodeHeight =
detail::SkipListRandomHeight::instance()->getHeight(max_layer + 1);
ScopedLocker guards[MAX_HEIGHT];
if (!lockNodesForChange(nodeHeight, guards, preds, succs)) {
continue; // give up the locks and retry until all valid
}
// locks acquired and all valid, need to modify the links under the locks.
newNode = NodeType::create(
recycler_.alloc(), nodeHeight, std::forward<U>(data));
for (int k = 0; k < nodeHeight; ++k) {
newNode->setSkip(k, succs[k]);
preds[k]->setSkip(k, newNode);
}
newNode->setFullyLinked();
newSize = incrementSize(1);
break;
}
int hgt = height();
size_t sizeLimit =
detail::SkipListRandomHeight::instance()->getSizeLimit(hgt);
if (hgt < MAX_HEIGHT && newSize > sizeLimit) {
growHeight(hgt + 1);
}
CHECK_GT(newSize, 0);
return std::make_pair(newNode, newSize);
}
bool remove(const value_type& data) {
NodeType* nodeToDelete = nullptr;
ScopedLocker nodeGuard;
bool isMarked = false;
int nodeHeight = 0;
NodeType *preds[MAX_HEIGHT], *succs[MAX_HEIGHT];
while (true) {
int max_layer = 0;
int layer = findInsertionPointGetMaxLayer(data, preds, succs, &max_layer);
if (!isMarked && (layer < 0 || !okToDelete(succs[layer], layer))) {
return false;
}
if (!isMarked) {
nodeToDelete = succs[layer];
nodeHeight = nodeToDelete->height();
nodeGuard = nodeToDelete->acquireGuard();
if (nodeToDelete->markedForRemoval()) {
return false;
}
nodeToDelete->setMarkedForRemoval();
isMarked = true;
}
// acquire pred locks from bottom layer up
ScopedLocker guards[MAX_HEIGHT];
if (!lockNodesForChange(nodeHeight, guards, preds, succs, false)) {
continue; // this will unlock all the locks
}
for (int k = nodeHeight - 1; k >= 0; --k) {
preds[k]->setSkip(k, nodeToDelete->skip(k));
}
incrementSize(-1);
break;
}
recycle(nodeToDelete);
return true;
}
const value_type* first() const {
auto node = head_.load(std::memory_order_consume)->skip(0);
return node ? &node->data() : nullptr;
}
const value_type* last() const {
NodeType* pred = head_.load(std::memory_order_consume);
NodeType* node = nullptr;
for (int layer = maxLayer(); layer >= 0; --layer) {
do {
node = pred->skip(layer);
if (node) {
pred = node;
}
} while (node != nullptr);
}
return pred == head_.load(std::memory_order_relaxed) ? nullptr
: &pred->data();
}
static bool okToDelete(NodeType* candidate, int layer) {
DCHECK(candidate != nullptr);
return candidate->fullyLinked() && candidate->maxLayer() == layer &&
!candidate->markedForRemoval();
}
// find node for insertion/deleting
int findInsertionPointGetMaxLayer(
const value_type& data,
NodeType* preds[],
NodeType* succs[],
int* max_layer) const {
*max_layer = maxLayer();
return findInsertionPoint(
head_.load(std::memory_order_consume), *max_layer, data, preds, succs);
}
// Find node for access. Returns a pair of values:
// pair.first = the first node that is no less than the data value
// pair.second = 1 when the data value is found, or 0 otherwise.
// This is like lower_bound, but not exact: we could have the node marked for
// removal so still need to check that.
std::pair<NodeType*, int> findNode(const value_type& data) const {
return findNodeDownRight(data);
}
// Find node by first stepping down then stepping right. Based on benchmark
// results, this is slightly faster than findNodeRightDown due to better
// locality on the skipping pointers.
std::pair<NodeType*, int> findNodeDownRight(const value_type& data) const {
NodeType* pred = head_.load(std::memory_order_consume);
int ht = pred->height();
NodeType* node = nullptr;
bool found = false;
while (!found) {
// stepping down
for (; ht > 0 && less(data, node = pred->skip(ht - 1)); --ht) {
}
if (ht == 0) {
return std::make_pair(node, 0); // not found
}
// node <= data now, but we need to fix up ht
--ht;
// stepping right
while (greater(data, node)) {
pred = node;
node = node->skip(ht);
}
found = !less(data, node);
}
return std::make_pair(node, found);
}
// find node by first stepping right then stepping down.
// We still keep this for reference purposes.
std::pair<NodeType*, int> findNodeRightDown(const value_type& data) const {
NodeType* pred = head_.load(std::memory_order_consume);
NodeType* node = nullptr;
auto top = maxLayer();
int found = 0;
for (int layer = top; !found && layer >= 0; --layer) {
node = pred->skip(layer);
while (greater(data, node)) {
pred = node;
node = node->skip(layer);
}
found = !less(data, node);
}
return std::make_pair(node, found);
}
NodeType* lower_bound(const value_type& data) const {
auto node = findNode(data).first;
while (node != nullptr && node->markedForRemoval()) {
node = node->skip(0);
}
return node;
}
void growHeight(int height) {
NodeType* oldHead = head_.load(std::memory_order_consume);
if (oldHead->height() >= height) { // someone else already did this
return;
}
NodeType* newHead =
NodeType::create(recycler_.alloc(), height, value_type(), true);
{ // need to guard the head node in case others are adding/removing
// nodes linked to the head.
ScopedLocker g = oldHead->acquireGuard();
newHead->copyHead(oldHead);
NodeType* expected = oldHead;
if (!head_.compare_exchange_strong(
expected, newHead, std::memory_order_release)) {
// if someone has already done the swap, just return.
NodeType::destroy(recycler_.alloc(), newHead);
return;
}
oldHead->setMarkedForRemoval();
}
recycle(oldHead);
}
void recycle(NodeType* node) {
recycler_.add(node);
}
detail::NodeRecycler<NodeType, NodeAlloc> recycler_;
std::atomic<NodeType*> head_;
std::atomic<size_t> size_;
};
template <typename T, typename Comp, typename NodeAlloc, int MAX_HEIGHT>
class ConcurrentSkipList<T, Comp, NodeAlloc, MAX_HEIGHT>::Accessor {
typedef detail::SkipListNode<T> NodeType;
typedef ConcurrentSkipList<T, Comp, NodeAlloc, MAX_HEIGHT> SkipListType;
public:
typedef T value_type;
typedef T key_type;
typedef T& reference;
typedef T* pointer;
typedef const T& const_reference;
typedef const T* const_pointer;
typedef size_t size_type;
typedef Comp key_compare;
typedef Comp value_compare;
typedef typename SkipListType::iterator iterator;
typedef typename SkipListType::const_iterator const_iterator;
typedef typename SkipListType::Skipper Skipper;
explicit Accessor(std::shared_ptr<ConcurrentSkipList> skip_list)
: slHolder_(std::move(skip_list)) {
sl_ = slHolder_.get();
DCHECK(sl_ != nullptr);
sl_->recycler_.addRef();
}
// Unsafe initializer: the caller assumes the responsibility to keep
// skip_list valid during the whole life cycle of the Accessor.
explicit Accessor(ConcurrentSkipList* skip_list) : sl_(skip_list) {
DCHECK(sl_ != nullptr);
sl_->recycler_.addRef();
}
Accessor(const Accessor& accessor)
: sl_(accessor.sl_), slHolder_(accessor.slHolder_) {
sl_->recycler_.addRef();
}
Accessor& operator=(const Accessor& accessor) {
if (this != &accessor) {
slHolder_ = accessor.slHolder_;
sl_->recycler_.releaseRef();
sl_ = accessor.sl_;
sl_->recycler_.addRef();
}
return *this;
}
~Accessor() {
sl_->recycler_.releaseRef();
}
bool empty() const {
return sl_->size() == 0;
}
size_t size() const {
return sl_->size();
}
size_type max_size() const {
return std::numeric_limits<size_type>::max();
}
// returns end() if the value is not in the list, otherwise returns an
// iterator pointing to the data, and it's guaranteed that the data is valid
// as long as the Accessor is held.
iterator find(const key_type& value) {
return iterator(sl_->find(value));
}
const_iterator find(const key_type& value) const {
return iterator(sl_->find(value));
}
size_type count(const key_type& data) const {
return contains(data);
}
iterator begin() const {
NodeType* head = sl_->head_.load(std::memory_order_consume);
return iterator(head->next());
}
iterator end() const {
return iterator(nullptr);
}
const_iterator cbegin() const {
return begin();
}
const_iterator cend() const {
return end();
}
template <
typename U,
typename =
typename std::enable_if<std::is_convertible<U, T>::value>::type>
std::pair<iterator, bool> insert(U&& data) {
auto ret = sl_->addOrGetData(std::forward<U>(data));
return std::make_pair(iterator(ret.first), ret.second);
}
size_t erase(const key_type& data) {
return remove(data);
}
iterator lower_bound(const key_type& data) const {
return iterator(sl_->lower_bound(data));
}
size_t height() const {
return sl_->height();
}
// first() returns pointer to the first element in the skiplist, or
// nullptr if empty.
//
// last() returns the pointer to the last element in the skiplist,
// nullptr if list is empty.
//
// Note: As concurrent writing can happen, first() is not
// guaranteed to be the min_element() in the list. Similarly
// last() is not guaranteed to be the max_element(), and both of them can
// be invalid (i.e. nullptr), so we name them differently from front() and
// tail() here.
const key_type* first() const {
return sl_->first();
}
const key_type* last() const {
return sl_->last();
}
// Try to remove the last element in the skip list.
//
// Returns true if we removed it, false if either the list is empty
// or a race condition happened (i.e. the used-to-be last element
// was already removed by another thread).
bool pop_back() {
auto last = sl_->last();
return last ? sl_->remove(*last) : false;
}
std::pair<key_type*, bool> addOrGetData(const key_type& data) {
auto ret = sl_->addOrGetData(data);
return std::make_pair(&ret.first->data(), ret.second);
}
SkipListType* skiplist() const {
return sl_;
}
// legacy interfaces
// TODO:(xliu) remove these.
bool contains(const key_type& data) const {
return sl_->find(data);
}
// Returns true if the node is added successfully, false if not, i.e. a
// node with the same key already existed in the list.
bool add(const key_type& data) {
return sl_->addOrGetData(data).second;
}
bool remove(const key_type& data) {
return sl_->remove(data);
}
private:
SkipListType* sl_;
std::shared_ptr<SkipListType> slHolder_;
};
// implements forward iterator concept.
template <typename ValT, typename NodeT>
class detail::csl_iterator : public boost::iterator_facade<
csl_iterator<ValT, NodeT>,
ValT,
boost::forward_traversal_tag> {
public:
typedef ValT value_type;
typedef value_type& reference;
typedef value_type* pointer;
typedef ptrdiff_t difference_type;
explicit csl_iterator(NodeT* node = nullptr) : node_(node) {}
template <typename OtherVal, typename OtherNode>
csl_iterator(
const csl_iterator<OtherVal, OtherNode>& other,
typename std::enable_if<
std::is_convertible<OtherVal, ValT>::value>::type* = nullptr)
: node_(other.node_) {}
size_t nodeSize() const {
return node_ == nullptr ? 0
: node_->height() * sizeof(NodeT*) + sizeof(*this);
}
bool good() const {
return node_ != nullptr;
}
private:
friend class boost::iterator_core_access;
template <class, class>
friend class csl_iterator;
void increment() {
node_ = node_->next();
}
bool equal(const csl_iterator& other) const {
return node_ == other.node_;
}
value_type& dereference() const {
return node_->data();
}
NodeT* node_;
};
// Skipper interface
template <typename T, typename Comp, typename NodeAlloc, int MAX_HEIGHT>
class ConcurrentSkipList<T, Comp, NodeAlloc, MAX_HEIGHT>::Skipper {
typedef detail::SkipListNode<T> NodeType;
typedef ConcurrentSkipList<T, Comp, NodeAlloc, MAX_HEIGHT> SkipListType;
typedef typename SkipListType::Accessor Accessor;
public:
typedef T value_type;
typedef T& reference;
typedef T* pointer;
typedef ptrdiff_t difference_type;
Skipper(const std::shared_ptr<SkipListType>& skipList) : accessor_(skipList) {
init();
}
Skipper(const Accessor& accessor) : accessor_(accessor) {
init();
}
void init() {
// need to cache the head node
NodeType* head_node = head();
headHeight_ = head_node->height();
for (int i = 0; i < headHeight_; ++i) {
preds_[i] = head_node;
succs_[i] = head_node->skip(i);
}
int max_layer = maxLayer();
for (int i = 0; i < max_layer; ++i) {
hints_[i] = uint8_t(i + 1);
}
hints_[max_layer] = max_layer;
}
// advance to the next node in the list.
Skipper& operator++() {
preds_[0] = succs_[0];
succs_[0] = preds_[0]->skip(0);
int height = curHeight();
for (int i = 1; i < height && preds_[0] == succs_[i]; ++i) {
preds_[i] = succs_[i];
succs_[i] = preds_[i]->skip(i);
}
return *this;
}
bool good() const {
return succs_[0] != nullptr;
}
int maxLayer() const {
return headHeight_ - 1;
}
int curHeight() const {
// need to cap the height to the cached head height, as the current node
// might be some newly inserted node and also during the time period the
// head height may have grown.
return succs_[0] ? std::min(headHeight_, succs_[0]->height()) : 0;
}
const value_type& data() const {
DCHECK(succs_[0] != nullptr);
return succs_[0]->data();
}
value_type& operator*() const {
DCHECK(succs_[0] != nullptr);
return succs_[0]->data();
}
value_type* operator->() {
DCHECK(succs_[0] != nullptr);
return &succs_[0]->data();
}
/*
* Skip to the position whose data is no less than the parameter.
* (I.e. the lower_bound).
*
* Returns true if the data is found, false otherwise.
*/
bool to(const value_type& data) {
int layer = curHeight() - 1;
if (layer < 0) {
return false; // reaches the end of the list
}
int lyr = hints_[layer];
int max_layer = maxLayer();
while (SkipListType::greater(data, succs_[lyr]) && lyr < max_layer) {
++lyr;
}
hints_[layer] = lyr; // update the hint
int foundLayer = SkipListType::findInsertionPoint(
preds_[lyr], lyr, data, preds_, succs_);
if (foundLayer < 0) {
return false;
}
DCHECK(succs_[0] != nullptr)
<< "lyr=" << lyr << "; max_layer=" << max_layer;
return !succs_[0]->markedForRemoval();
}
private:
NodeType* head() const {
return accessor_.skiplist()->head_.load(std::memory_order_consume);
}
Accessor accessor_;
int headHeight_;
NodeType *succs_[MAX_HEIGHT], *preds_[MAX_HEIGHT];
uint8_t hints_[MAX_HEIGHT];
};
} // namespace folly


@ -1,421 +0,0 @@
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <cstdint>
#include <limits>
#include <type_traits>
namespace folly {
// TODO: Replace with std::equal_to, etc., after upgrading to C++14.
template <typename T>
struct constexpr_equal_to {
constexpr bool operator()(T const& a, T const& b) const {
return a == b;
}
};
template <typename T>
struct constexpr_not_equal_to {
constexpr bool operator()(T const& a, T const& b) const {
return a != b;
}
};
template <typename T>
struct constexpr_less {
constexpr bool operator()(T const& a, T const& b) const {
return a < b;
}
};
template <typename T>
struct constexpr_less_equal {
constexpr bool operator()(T const& a, T const& b) const {
return a <= b;
}
};
template <typename T>
struct constexpr_greater {
constexpr bool operator()(T const& a, T const& b) const {
return a > b;
}
};
template <typename T>
struct constexpr_greater_equal {
constexpr bool operator()(T const& a, T const& b) const {
return a >= b;
}
};
// TLDR: Prefer using operator< for ordering. And when
// a and b are equivalent objects, we return b to make
// sorting stable.
// See http://stepanovpapers.com/notes.pdf for details.
template <typename T>
constexpr T constexpr_max(T a) {
return a;
}
template <typename T, typename... Ts>
constexpr T constexpr_max(T a, T b, Ts... ts) {
return b < a ? constexpr_max(a, ts...) : constexpr_max(b, ts...);
}
// When a and b are equivalent objects, we return a to
// make sorting stable.
template <typename T>
constexpr T constexpr_min(T a) {
return a;
}
template <typename T, typename... Ts>
constexpr T constexpr_min(T a, T b, Ts... ts) {
return b < a ? constexpr_min(b, ts...) : constexpr_min(a, ts...);
}
template <typename T, typename Less>
constexpr T const&
constexpr_clamp(T const& v, T const& lo, T const& hi, Less less) {
return less(v, lo) ? lo : less(hi, v) ? hi : v;
}
template <typename T>
constexpr T const& constexpr_clamp(T const& v, T const& lo, T const& hi) {
return constexpr_clamp(v, lo, hi, constexpr_less<T>{});
}
namespace detail {
template <typename T, typename = void>
struct constexpr_abs_helper {};
template <typename T>
struct constexpr_abs_helper<
T,
typename std::enable_if<std::is_floating_point<T>::value>::type> {
static constexpr T go(T t) {
return t < static_cast<T>(0) ? -t : t;
}
};
template <typename T>
struct constexpr_abs_helper<
T,
typename std::enable_if<
std::is_integral<T>::value && !std::is_same<T, bool>::value &&
std::is_unsigned<T>::value>::type> {
static constexpr T go(T t) {
return t;
}
};
template <typename T>
struct constexpr_abs_helper<
T,
typename std::enable_if<
std::is_integral<T>::value && !std::is_same<T, bool>::value &&
std::is_signed<T>::value>::type> {
static constexpr typename std::make_unsigned<T>::type go(T t) {
return typename std::make_unsigned<T>::type(t < static_cast<T>(0) ? -t : t);
}
};
} // namespace detail
template <typename T>
constexpr auto constexpr_abs(T t)
-> decltype(detail::constexpr_abs_helper<T>::go(t)) {
return detail::constexpr_abs_helper<T>::go(t);
}
namespace detail {
template <typename T>
constexpr T constexpr_log2_(T a, T e) {
return e == T(1) ? a : constexpr_log2_(a + T(1), e / T(2));
}
template <typename T>
constexpr T constexpr_log2_ceil_(T l2, T t) {
return l2 + T(T(1) << l2 < t ? 1 : 0);
}
template <typename T>
constexpr T constexpr_square_(T t) {
return t * t;
}
} // namespace detail
template <typename T>
constexpr T constexpr_log2(T t) {
return detail::constexpr_log2_(T(0), t);
}
template <typename T>
constexpr T constexpr_log2_ceil(T t) {
return detail::constexpr_log2_ceil_(constexpr_log2(t), t);
}
template <typename T>
constexpr T constexpr_ceil(T t, T round) {
return round == T(0)
? t
: ((t + (t < T(0) ? T(0) : round - T(1))) / round) * round;
}
template <typename T>
constexpr T constexpr_pow(T base, std::size_t exp) {
return exp == 0
? T(1)
: exp == 1 ? base
: detail::constexpr_square_(constexpr_pow(base, exp / 2)) *
(exp % 2 ? base : T(1));
}
/// constexpr_find_last_set
///
/// Return the 1-based index of the most significant bit which is set.
/// For x > 0, constexpr_find_last_set(x) == 1 + floor(log2(x)).
template <typename T>
constexpr std::size_t constexpr_find_last_set(T const t) {
using U = std::make_unsigned_t<T>;
return t == T(0) ? 0 : 1 + constexpr_log2(static_cast<U>(t));
}
namespace detail {
template <typename U>
constexpr std::size_t
constexpr_find_first_set_(std::size_t s, std::size_t a, U const u) {
return s == 0 ? a
: constexpr_find_first_set_(
s / 2, a + s * bool((u >> a) % (U(1) << s) == U(0)), u);
}
} // namespace detail
/// constexpr_find_first_set
///
/// Return the 1-based index of the least significant bit which is set.
/// For x > 0, the exponent in the smallest power of two which does not
/// divide x.
template <typename T>
constexpr std::size_t constexpr_find_first_set(T t) {
using U = std::make_unsigned_t<T>;
using size = std::integral_constant<std::size_t, sizeof(T) * 4>;
return t == T(0)
? 0
: 1 + detail::constexpr_find_first_set_(size{}, 0, static_cast<U>(t));
}
template <typename T>
constexpr T constexpr_add_overflow_clamped(T a, T b) {
using L = std::numeric_limits<T>;
using M = std::intmax_t;
static_assert(
!std::is_integral<T>::value || sizeof(T) <= sizeof(M),
"Integral type too large!");
// clang-format off
return
// don't do anything special for non-integral types.
!std::is_integral<T>::value ? a + b :
// for narrow integral types, just convert to intmax_t.
sizeof(T) < sizeof(M)
? T(constexpr_clamp(M(a) + M(b), M(L::min()), M(L::max()))) :
// when a >= 0, cannot add more than `MAX - a` onto a.
!(a < 0) ? a + constexpr_min(b, T(L::max() - a)) :
// a < 0 && b >= 0, `a + b` will always be in valid range of type T.
!(b < 0) ? a + b :
// a < 0 && b < 0, keep the result >= MIN.
a + constexpr_max(b, T(L::min() - a));
// clang-format on
}
template <typename T>
constexpr T constexpr_sub_overflow_clamped(T a, T b) {
using L = std::numeric_limits<T>;
using M = std::intmax_t;
static_assert(
!std::is_integral<T>::value || sizeof(T) <= sizeof(M),
"Integral type too large!");
// clang-format off
return
// don't do anything special for non-integral types.
!std::is_integral<T>::value ? a - b :
// for unsigned type, keep result >= 0.
std::is_unsigned<T>::value ? (a < b ? 0 : a - b) :
// for narrow signed integral types, just convert to intmax_t.
sizeof(T) < sizeof(M)
? T(constexpr_clamp(M(a) - M(b), M(L::min()), M(L::max()))) :
// (a >= 0 && b >= 0) || (a < 0 && b < 0), `a - b` will always be valid.
(a < 0) == (b < 0) ? a - b :
// MIN < b, so `-b` should be in valid range (-MAX <= -b <= MAX),
// convert subtraction to addition.
L::min() < b ? constexpr_add_overflow_clamped(a, T(-b)) :
// -b = -MIN = (MAX + 1) and a <= -1, result is in valid range.
a < 0 ? a - b :
// -b = -MIN = (MAX + 1) and a >= 0, result > MAX.
L::max();
// clang-format on
}
// clamp_cast<> provides sane numeric conversions from floating point numbers
// to integral numbers, and between different types of integral numbers. It
// helps avoid unexpected bugs introduced by bad conversions, and undefined
// behavior like overflow when casting floating point numbers to integral
// numbers.
//
// When doing clamp_cast<Dst>(value), if `value` is in the valid range of Dst,
// the result is the correct value in Dst, equal to `value`.
//
// If `value` is outside the representable range of Dst, it is clamped to
// MAX or MIN in Dst, instead of being undefined behavior.
//
// Float NaNs are converted to 0 in the integral type.
//
// Here's a comparison with static_cast<>:
// (with FB-internal gcc-5-glibc-2.23 toolchain)
//
// static_cast<int32_t>(NaN) = 6
// clamp_cast<int32_t>(NaN) = 0
//
// static_cast<int32_t>(9999999999.0f) = -348639895
// clamp_cast<int32_t>(9999999999.0f) = 2147483647
//
// static_cast<int32_t>(2147483647.0f) = -348639895
// clamp_cast<int32_t>(2147483647.0f) = 2147483647
//
// static_cast<uint32_t>(4294967295.0f) = 0
// clamp_cast<uint32_t>(4294967295.0f) = 4294967295
//
// static_cast<uint32_t>(-1) = 4294967295
// clamp_cast<uint32_t>(-1) = 0
//
// static_cast<int16_t>(32768u) = -32768
// clamp_cast<int16_t>(32768u) = 32767
template <typename Dst, typename Src>
constexpr typename std::enable_if<std::is_integral<Src>::value, Dst>::type
constexpr_clamp_cast(Src src) {
static_assert(
std::is_integral<Dst>::value && sizeof(Dst) <= sizeof(int64_t),
"constexpr_clamp_cast can only cast into integral type (up to 64bit)");
using L = std::numeric_limits<Dst>;
// clang-format off
return
// Check if Src and Dst have same signedness.
std::is_signed<Src>::value == std::is_signed<Dst>::value
? (
// Src and Dst have same signedness. If sizeof(Src) <= sizeof(Dst),
// we can safely convert Src to Dst without any loss of accuracy.
sizeof(Src) <= sizeof(Dst) ? Dst(src) :
// If Src is larger in size, we need to clamp it to valid range in Dst.
Dst(constexpr_clamp(src, Src(L::min()), Src(L::max()))))
// Src and Dst have different signedness.
  // Check if it's a signed -> unsigned cast.
: std::is_signed<Src>::value && std::is_unsigned<Dst>::value
? (
// If src < 0, the result should be 0.
src < 0 ? Dst(0) :
// Otherwise, src >= 0. If src can fit into Dst, we can safely cast it
// without loss of accuracy.
sizeof(Src) <= sizeof(Dst) ? Dst(src) :
// If Src is larger in size than Dst, we need to ensure the result is
// at most Dst MAX.
Dst(constexpr_min(src, Src(L::max()))))
// It's unsigned -> signed cast.
: (
// Since Src is unsigned, and Dst is signed, Src can fit into Dst only
// when sizeof(Src) < sizeof(Dst).
sizeof(Src) < sizeof(Dst) ? Dst(src) :
// If Src does not fit into Dst, we need to ensure the result is at most
// Dst MAX.
Dst(constexpr_min(src, Src(L::max()))));
// clang-format on
}
namespace detail {
// Upper/lower bound values that can be accurately represented in both
// integral and floating point types.
constexpr double kClampCastLowerBoundDoubleToInt64F = -9223372036854774784.0;
constexpr double kClampCastUpperBoundDoubleToInt64F = 9223372036854774784.0;
constexpr double kClampCastUpperBoundDoubleToUInt64F = 18446744073709549568.0;
constexpr float kClampCastLowerBoundFloatToInt32F = -2147483520.0f;
constexpr float kClampCastUpperBoundFloatToInt32F = 2147483520.0f;
constexpr float kClampCastUpperBoundFloatToUInt32F = 4294967040.0f;
// This works the same as constexpr_clamp, but the comparisons are done in Src
// to prevent any implicit promotions.
template <typename D, typename S>
constexpr D constexpr_clamp_cast_helper(S src, S sl, S su, D dl, D du) {
return src < sl ? dl : (src > su ? du : D(src));
}
} // namespace detail
template <typename Dst, typename Src>
constexpr typename std::enable_if<std::is_floating_point<Src>::value, Dst>::type
constexpr_clamp_cast(Src src) {
static_assert(
std::is_integral<Dst>::value && sizeof(Dst) <= sizeof(int64_t),
"constexpr_clamp_cast can only cast into integral type (up to 64bit)");
using L = std::numeric_limits<Dst>;
// clang-format off
return
// Special case: cast NaN into 0.
// Using a trick here to portably check for NaN: f != f only if f is NaN.
// see: https://stackoverflow.com/a/570694
(src != src) ? Dst(0) :
// using `sizeof(Src) > sizeof(Dst)` as a heuristic that Dst can be
// represented in Src without loss of accuracy.
// see: https://en.wikipedia.org/wiki/Floating-point_arithmetic
sizeof(Src) > sizeof(Dst) ?
detail::constexpr_clamp_cast_helper(
src, Src(L::min()), Src(L::max()), L::min(), L::max()) :
// sizeof(Src) < sizeof(Dst) only happens when doing cast of
// 32bit float -> u/int64_t.
// Losslessly promote float into double, change into double -> u/int64_t.
sizeof(Src) < sizeof(Dst) ? (
src >= 0.0
? constexpr_clamp_cast<Dst>(
constexpr_clamp_cast<std::uint64_t>(double(src)))
: constexpr_clamp_cast<Dst>(
constexpr_clamp_cast<std::int64_t>(double(src)))) :
// The following are for sizeof(Src) == sizeof(Dst).
std::is_same<Src, double>::value && std::is_same<Dst, int64_t>::value ?
detail::constexpr_clamp_cast_helper(
double(src),
detail::kClampCastLowerBoundDoubleToInt64F,
detail::kClampCastUpperBoundDoubleToInt64F,
L::min(),
L::max()) :
std::is_same<Src, double>::value && std::is_same<Dst, uint64_t>::value ?
detail::constexpr_clamp_cast_helper(
double(src),
0.0,
detail::kClampCastUpperBoundDoubleToUInt64F,
L::min(),
L::max()) :
std::is_same<Src, float>::value && std::is_same<Dst, int32_t>::value ?
detail::constexpr_clamp_cast_helper(
float(src),
detail::kClampCastLowerBoundFloatToInt32F,
detail::kClampCastUpperBoundFloatToInt32F,
L::min(),
L::max()) :
detail::constexpr_clamp_cast_helper(
float(src),
0.0f,
detail::kClampCastUpperBoundFloatToUInt32F,
L::min(),
L::max());
// clang-format on
}
} // namespace folly


@@ -1,798 +0,0 @@
/*
* Copyright 2011-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/Conv.h>
#include <array>
namespace folly {
namespace detail {
namespace {
/**
* Finds the first non-digit in a string. The number of digits
* searched depends on the precision of the Tgt integral. Assumes the
* string starts with NO whitespace and NO sign.
*
* The semantics of the routine is:
* for (;; ++b) {
* if (b >= e || !isdigit(*b)) return b;
* }
*
* Complete unrolling marks bottom-line (i.e. entire conversion)
* improvements of 20%.
*/
inline const char* findFirstNonDigit(const char* b, const char* e) {
for (; b < e; ++b) {
auto const c = static_cast<unsigned>(*b) - '0';
if (c >= 10) {
break;
}
}
return b;
}
// Maximum value of a number when represented as a string
template <class T>
struct MaxString {
static const char* const value;
};
template <>
const char* const MaxString<uint8_t>::value = "255";
template <>
const char* const MaxString<uint16_t>::value = "65535";
template <>
const char* const MaxString<uint32_t>::value = "4294967295";
#if __SIZEOF_LONG__ == 4
template <>
const char* const MaxString<unsigned long>::value = "4294967295";
#else
template <>
const char* const MaxString<unsigned long>::value = "18446744073709551615";
#endif
static_assert(
sizeof(unsigned long) >= 4,
"Wrong value for MaxString<unsigned long>::value,"
" please update.");
template <>
const char* const MaxString<unsigned long long>::value = "18446744073709551615";
static_assert(
sizeof(unsigned long long) >= 8,
"Wrong value for MaxString<unsigned long long>::value"
", please update.");
#if FOLLY_HAVE_INT128_T
template <>
const char* const MaxString<__uint128_t>::value =
"340282366920938463463374607431768211455";
#endif
/*
* Lookup tables that converts from a decimal character value to an integral
* binary value, shifted by a decimal "shift" multiplier.
* For all character values in the range '0'..'9', the table at those
* index locations returns the actual decimal value shifted by the multiplier.
* For all other values, the lookup table returns an invalid OOR value.
*/
// Out-of-range flag value, larger than the largest value that can fit in
// four decimal bytes (9999), but four of these added up together should
// still not overflow uint16_t.
constexpr int32_t OOR = 10000;
alignas(16) constexpr uint16_t shift1[] = {
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 0-9
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 10
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 20
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 30
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, 0, 1, // 40
2, 3, 4, 5, 6, 7, 8, 9, OOR, OOR,
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 60
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 70
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 80
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 90
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 100
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 110
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 120
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 130
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 140
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 150
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 160
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 170
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 180
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 190
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 200
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 210
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 220
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 230
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 240
OOR, OOR, OOR, OOR, OOR, OOR // 250
};
alignas(16) constexpr uint16_t shift10[] = {
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 0-9
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 10
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 20
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 30
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, 0, 10, // 40
20, 30, 40, 50, 60, 70, 80, 90, OOR, OOR,
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 60
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 70
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 80
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 90
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 100
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 110
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 120
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 130
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 140
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 150
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 160
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 170
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 180
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 190
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 200
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 210
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 220
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 230
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 240
OOR, OOR, OOR, OOR, OOR, OOR // 250
};
alignas(16) constexpr uint16_t shift100[] = {
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 0-9
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 10
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 20
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 30
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, 0, 100, // 40
200, 300, 400, 500, 600, 700, 800, 900, OOR, OOR,
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 60
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 70
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 80
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 90
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 100
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 110
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 120
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 130
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 140
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 150
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 160
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 170
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 180
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 190
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 200
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 210
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 220
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 230
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 240
OOR, OOR, OOR, OOR, OOR, OOR // 250
};
alignas(16) constexpr uint16_t shift1000[] = {
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 0-9
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 10
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 20
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 30
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, 0, 1000, // 40
2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, OOR, OOR,
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 60
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 70
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 80
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 90
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 100
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 110
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 120
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 130
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 140
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 150
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 160
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 170
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 180
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 190
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 200
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 210
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 220
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 230
OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, OOR, // 240
OOR, OOR, OOR, OOR, OOR, OOR // 250
};
struct ErrorString {
const char* string;
bool quote;
};
// Keep this in sync with ConversionCode in Conv.h
constexpr const std::array<
ErrorString,
static_cast<std::size_t>(ConversionCode::NUM_ERROR_CODES)>
kErrorStrings{{
{"Success", true},
{"Empty input string", true},
{"No digits found in input string", true},
{"Integer overflow when parsing bool (must be 0 or 1)", true},
{"Invalid value for bool", true},
{"Non-digit character found", true},
{"Invalid leading character", true},
{"Overflow during conversion", true},
{"Negative overflow during conversion", true},
{"Unable to convert string to floating point value", true},
{"Non-whitespace character found after end of conversion", true},
{"Overflow during arithmetic conversion", false},
{"Negative overflow during arithmetic conversion", false},
{"Loss of precision during arithmetic conversion", false},
}};
// Check if ASCII is really ASCII
using IsAscii =
bool_constant<'A' == 65 && 'Z' == 90 && 'a' == 97 && 'z' == 122>;
// The code in this file that uses tolower() really only cares about
// 7-bit ASCII characters, so we can take a nice shortcut here.
inline char tolower_ascii(char in) {
return IsAscii::value ? in | 0x20 : char(std::tolower(in));
}
inline bool bool_str_cmp(const char** b, size_t len, const char* value) {
// Can't use strncasecmp, since we want to ensure that the full value matches
const char* p = *b;
const char* e = *b + len;
const char* v = value;
while (*v != '\0') {
if (p == e || tolower_ascii(*p) != *v) { // value is already lowercase
return false;
}
++p;
++v;
}
*b = p;
return true;
}
} // namespace
Expected<bool, ConversionCode> str_to_bool(StringPiece* src) noexcept {
auto b = src->begin(), e = src->end();
for (;; ++b) {
if (b >= e) {
return makeUnexpected(ConversionCode::EMPTY_INPUT_STRING);
}
if (!std::isspace(*b)) {
break;
}
}
bool result;
size_t len = size_t(e - b);
switch (*b) {
case '0':
case '1': {
result = false;
for (; b < e && isdigit(*b); ++b) {
if (result || (*b != '0' && *b != '1')) {
return makeUnexpected(ConversionCode::BOOL_OVERFLOW);
}
result = (*b == '1');
}
break;
}
case 'y':
case 'Y':
result = true;
if (!bool_str_cmp(&b, len, "yes")) {
++b; // accept the single 'y' character
}
break;
case 'n':
case 'N':
result = false;
if (!bool_str_cmp(&b, len, "no")) {
++b;
}
break;
case 't':
case 'T':
result = true;
if (!bool_str_cmp(&b, len, "true")) {
++b;
}
break;
case 'f':
case 'F':
result = false;
if (!bool_str_cmp(&b, len, "false")) {
++b;
}
break;
case 'o':
case 'O':
if (bool_str_cmp(&b, len, "on")) {
result = true;
} else if (bool_str_cmp(&b, len, "off")) {
result = false;
} else {
return makeUnexpected(ConversionCode::BOOL_INVALID_VALUE);
}
break;
default:
return makeUnexpected(ConversionCode::BOOL_INVALID_VALUE);
}
src->assign(b, e);
return result;
}
/**
* StringPiece to double, with progress information. Alters the
* StringPiece parameter to munch the already-parsed characters.
*/
template <class Tgt>
Expected<Tgt, ConversionCode> str_to_floating(StringPiece* src) noexcept {
using namespace double_conversion;
static StringToDoubleConverter conv(
StringToDoubleConverter::ALLOW_TRAILING_JUNK |
StringToDoubleConverter::ALLOW_LEADING_SPACES,
0.0,
// return this for junk input string
std::numeric_limits<double>::quiet_NaN(),
nullptr,
nullptr);
if (src->empty()) {
return makeUnexpected(ConversionCode::EMPTY_INPUT_STRING);
}
int length;
auto result = conv.StringToDouble(
src->data(),
static_cast<int>(src->size()),
&length); // processed char count
if (!std::isnan(result)) {
// If we get here with length = 0, the input string is empty.
// If we get here with result = 0.0, it's either because the string
// contained only whitespace, or because we had an actual zero value
// (with potential trailing junk). If it was only whitespace, we
// want to raise an error; length will point past the last character
// that was processed, so we need to check if that character was
// whitespace or not.
if (length == 0 ||
(result == 0.0 && std::isspace((*src)[size_t(length) - 1]))) {
return makeUnexpected(ConversionCode::EMPTY_INPUT_STRING);
}
if (length >= 2) {
const char* suffix = src->data() + length - 1;
// double_conversion doesn't update length correctly when there is an
// incomplete exponent specifier. Converting "12e-f-g" shouldn't consume
// any more than "12", but it will consume "12e-".
// "123-" should only parse "123"
if (*suffix == '-' || *suffix == '+') {
--suffix;
--length;
}
// "12e-f-g" or "12euro" should only parse "12"
if (*suffix == 'e' || *suffix == 'E') {
--length;
}
}
src->advance(size_t(length));
return Tgt(result);
}
auto* e = src->end();
auto* b =
std::find_if_not(src->begin(), e, [](char c) { return std::isspace(c); });
// There must be non-whitespace, otherwise we would have caught this above
assert(b < e);
size_t size = size_t(e - b);
bool negative = false;
if (*b == '-') {
negative = true;
++b;
--size;
}
result = 0.0;
switch (tolower_ascii(*b)) {
case 'i':
if (size >= 3 && tolower_ascii(b[1]) == 'n' &&
tolower_ascii(b[2]) == 'f') {
if (size >= 8 && tolower_ascii(b[3]) == 'i' &&
tolower_ascii(b[4]) == 'n' && tolower_ascii(b[5]) == 'i' &&
tolower_ascii(b[6]) == 't' && tolower_ascii(b[7]) == 'y') {
b += 8;
} else {
b += 3;
}
result = std::numeric_limits<Tgt>::infinity();
}
break;
case 'n':
if (size >= 3 && tolower_ascii(b[1]) == 'a' &&
tolower_ascii(b[2]) == 'n') {
b += 3;
result = std::numeric_limits<Tgt>::quiet_NaN();
}
break;
default:
break;
}
if (result == 0.0) {
// All bets are off
return makeUnexpected(ConversionCode::STRING_TO_FLOAT_ERROR);
}
if (negative) {
result = -result;
}
src->assign(b, e);
return Tgt(result);
}
template Expected<float, ConversionCode> str_to_floating<float>(
StringPiece* src) noexcept;
template Expected<double, ConversionCode> str_to_floating<double>(
StringPiece* src) noexcept;
/**
* This class takes care of additional processing needed for signed values,
* like leading sign character and overflow checks.
*/
template <typename T, bool IsSigned = std::is_signed<T>::value>
class SignedValueHandler;
template <typename T>
class SignedValueHandler<T, true> {
public:
ConversionCode init(const char*& b) {
negative_ = false;
if (!std::isdigit(*b)) {
if (*b == '-') {
negative_ = true;
} else if (UNLIKELY(*b != '+')) {
return ConversionCode::INVALID_LEADING_CHAR;
}
++b;
}
return ConversionCode::SUCCESS;
}
ConversionCode overflow() {
return negative_ ? ConversionCode::NEGATIVE_OVERFLOW
: ConversionCode::POSITIVE_OVERFLOW;
}
template <typename U>
Expected<T, ConversionCode> finalize(U value) {
T rv;
if (negative_) {
FOLLY_PUSH_WARNING
FOLLY_MSVC_DISABLE_WARNING(4146) // unary minus operator applied to unsigned type, result still unsigned
rv = T(-value);
FOLLY_POP_WARNING
if (UNLIKELY(rv > 0)) {
return makeUnexpected(ConversionCode::NEGATIVE_OVERFLOW);
}
} else {
rv = T(value);
if (UNLIKELY(rv < 0)) {
return makeUnexpected(ConversionCode::POSITIVE_OVERFLOW);
}
}
return rv;
}
private:
bool negative_;
};
// For unsigned types, we don't need any extra processing
template <typename T>
class SignedValueHandler<T, false> {
public:
ConversionCode init(const char*&) {
return ConversionCode::SUCCESS;
}
ConversionCode overflow() {
return ConversionCode::POSITIVE_OVERFLOW;
}
Expected<T, ConversionCode> finalize(T value) {
return value;
}
};
/**
 * Converts a string, represented as a pair of pointers to char, to
 * signed/unsigned integrals. Assumes NO whitespace before or after, and also
 * that the string is composed entirely of digits (and an optional sign only
 * for signed types). The string may be empty, in which case digits_to returns
 * an appropriate error.
*/
template <class Tgt>
inline Expected<Tgt, ConversionCode> digits_to(
const char* b,
const char* const e) noexcept {
using UT = typename std::make_unsigned<Tgt>::type;
assert(b <= e);
SignedValueHandler<Tgt> sgn;
auto err = sgn.init(b);
if (UNLIKELY(err != ConversionCode::SUCCESS)) {
return makeUnexpected(err);
}
size_t size = size_t(e - b);
/* Although the string is entirely made of digits, we still need to
* check for overflow.
*/
if (size > std::numeric_limits<UT>::digits10) {
// Leading zeros?
if (b < e && *b == '0') {
for (++b;; ++b) {
if (b == e) {
return Tgt(0); // just zeros, e.g. "0000"
}
if (*b != '0') {
size = size_t(e - b);
break;
}
}
}
if (size > std::numeric_limits<UT>::digits10 &&
(size != std::numeric_limits<UT>::digits10 + 1 ||
strncmp(b, MaxString<UT>::value, size) > 0)) {
return makeUnexpected(sgn.overflow());
}
}
// Here we know that the number won't overflow when
// converted. Proceed without checks.
UT result = 0;
for (; e - b >= 4; b += 4) {
FOLLY_PUSH_WARNING
FOLLY_MSVC_DISABLE_WARNING(4309) // truncation of constant value
result *= static_cast<UT>(10000);
FOLLY_POP_WARNING
const int32_t r0 = shift1000[static_cast<size_t>(b[0])];
const int32_t r1 = shift100[static_cast<size_t>(b[1])];
const int32_t r2 = shift10[static_cast<size_t>(b[2])];
const int32_t r3 = shift1[static_cast<size_t>(b[3])];
const auto sum = r0 + r1 + r2 + r3;
if (sum >= OOR) {
goto outOfRange;
}
result += UT(sum);
}
switch (e - b) {
case 3: {
const int32_t r0 = shift100[static_cast<size_t>(b[0])];
const int32_t r1 = shift10[static_cast<size_t>(b[1])];
const int32_t r2 = shift1[static_cast<size_t>(b[2])];
const auto sum = r0 + r1 + r2;
if (sum >= OOR) {
goto outOfRange;
}
result = UT(1000 * result + sum);
break;
}
case 2: {
const int32_t r0 = shift10[static_cast<size_t>(b[0])];
const int32_t r1 = shift1[static_cast<size_t>(b[1])];
const auto sum = r0 + r1;
if (sum >= OOR) {
goto outOfRange;
}
result = UT(100 * result + sum);
break;
}
case 1: {
const int32_t sum = shift1[static_cast<size_t>(b[0])];
if (sum >= OOR) {
goto outOfRange;
}
result = UT(10 * result + sum);
break;
}
default:
assert(b == e);
if (size == 0) {
return makeUnexpected(ConversionCode::NO_DIGITS);
}
break;
}
return sgn.finalize(result);
outOfRange:
return makeUnexpected(ConversionCode::NON_DIGIT_CHAR);
}
template Expected<char, ConversionCode> digits_to<char>(
const char*,
const char*) noexcept;
template Expected<signed char, ConversionCode> digits_to<signed char>(
const char*,
const char*) noexcept;
template Expected<unsigned char, ConversionCode> digits_to<unsigned char>(
const char*,
const char*) noexcept;
template Expected<short, ConversionCode> digits_to<short>(
const char*,
const char*) noexcept;
template Expected<unsigned short, ConversionCode> digits_to<unsigned short>(
const char*,
const char*) noexcept;
template Expected<int, ConversionCode> digits_to<int>(
const char*,
const char*) noexcept;
template Expected<unsigned int, ConversionCode> digits_to<unsigned int>(
const char*,
const char*) noexcept;
template Expected<long, ConversionCode> digits_to<long>(
const char*,
const char*) noexcept;
template Expected<unsigned long, ConversionCode> digits_to<unsigned long>(
const char*,
const char*) noexcept;
template Expected<long long, ConversionCode> digits_to<long long>(
const char*,
const char*) noexcept;
template Expected<unsigned long long, ConversionCode>
digits_to<unsigned long long>(const char*, const char*) noexcept;
#if FOLLY_HAVE_INT128_T
template Expected<__int128, ConversionCode> digits_to<__int128>(
const char*,
const char*) noexcept;
template Expected<unsigned __int128, ConversionCode>
digits_to<unsigned __int128>(const char*, const char*) noexcept;
#endif
/**
* StringPiece to integrals, with progress information. Alters the
* StringPiece parameter to munch the already-parsed characters.
*/
template <class Tgt>
Expected<Tgt, ConversionCode> str_to_integral(StringPiece* src) noexcept {
using UT = typename std::make_unsigned<Tgt>::type;
auto b = src->data(), past = src->data() + src->size();
for (;; ++b) {
if (UNLIKELY(b >= past)) {
return makeUnexpected(ConversionCode::EMPTY_INPUT_STRING);
}
if (!std::isspace(*b)) {
break;
}
}
SignedValueHandler<Tgt> sgn;
auto err = sgn.init(b);
if (UNLIKELY(err != ConversionCode::SUCCESS)) {
return makeUnexpected(err);
}
FOLLY_PUSH_WARNING
FOLLY_MSVC_DISABLE_WARNING(4127) // conditional expression is constant
if (std::is_signed<Tgt>::value && UNLIKELY(b >= past)) {
FOLLY_POP_WARNING
return makeUnexpected(ConversionCode::NO_DIGITS);
}
if (UNLIKELY(!isdigit(*b))) {
return makeUnexpected(ConversionCode::NON_DIGIT_CHAR);
}
auto m = findFirstNonDigit(b + 1, past);
auto tmp = digits_to<UT>(b, m);
if (UNLIKELY(!tmp.hasValue())) {
return makeUnexpected(
tmp.error() == ConversionCode::POSITIVE_OVERFLOW ? sgn.overflow()
: tmp.error());
}
auto res = sgn.finalize(tmp.value());
if (res.hasValue()) {
src->advance(size_t(m - src->data()));
}
return res;
}
template Expected<char, ConversionCode> str_to_integral<char>(
StringPiece* src) noexcept;
template Expected<signed char, ConversionCode> str_to_integral<signed char>(
StringPiece* src) noexcept;
template Expected<unsigned char, ConversionCode> str_to_integral<unsigned char>(
StringPiece* src) noexcept;
template Expected<short, ConversionCode> str_to_integral<short>(
StringPiece* src) noexcept;
template Expected<unsigned short, ConversionCode>
str_to_integral<unsigned short>(StringPiece* src) noexcept;
template Expected<int, ConversionCode> str_to_integral<int>(
StringPiece* src) noexcept;
template Expected<unsigned int, ConversionCode> str_to_integral<unsigned int>(
StringPiece* src) noexcept;
template Expected<long, ConversionCode> str_to_integral<long>(
StringPiece* src) noexcept;
template Expected<unsigned long, ConversionCode> str_to_integral<unsigned long>(
StringPiece* src) noexcept;
template Expected<long long, ConversionCode> str_to_integral<long long>(
StringPiece* src) noexcept;
template Expected<unsigned long long, ConversionCode>
str_to_integral<unsigned long long>(StringPiece* src) noexcept;
#if FOLLY_HAVE_INT128_T
template Expected<__int128, ConversionCode> str_to_integral<__int128>(
StringPiece* src) noexcept;
template Expected<unsigned __int128, ConversionCode>
str_to_integral<unsigned __int128>(StringPiece* src) noexcept;
#endif
} // namespace detail
ConversionError makeConversionError(ConversionCode code, StringPiece input) {
using namespace detail;
static_assert(
std::is_unsigned<std::underlying_type<ConversionCode>::type>::value,
"ConversionCode should be unsigned");
assert((std::size_t)code < kErrorStrings.size());
const ErrorString& err = kErrorStrings[(std::size_t)code];
if (code == ConversionCode::EMPTY_INPUT_STRING && input.empty()) {
return {err.string, code};
}
std::string tmp(err.string);
tmp.append(": ");
if (err.quote) {
tmp.append(1, '"');
}
if (input.size() > 0) {
tmp.append(input.data(), input.size());
}
if (err.quote) {
tmp.append(1, '"');
}
return {tmp, code};
}
} // namespace folly



@@ -1,121 +0,0 @@
/*
* Copyright 2015-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* GCC compatible wrappers around clang attributes.
*
* @author Dominik Gabi
*/
#pragma once
#ifndef __has_attribute
#define FOLLY_HAS_ATTRIBUTE(x) 0
#else
#define FOLLY_HAS_ATTRIBUTE(x) __has_attribute(x)
#endif
#ifndef __has_cpp_attribute
#define FOLLY_HAS_CPP_ATTRIBUTE(x) 0
#else
#define FOLLY_HAS_CPP_ATTRIBUTE(x) __has_cpp_attribute(x)
#endif
#ifndef __has_extension
#define FOLLY_HAS_EXTENSION(x) 0
#else
#define FOLLY_HAS_EXTENSION(x) __has_extension(x)
#endif
/**
* Fallthrough to indicate that `break` was left out on purpose in a switch
* statement, e.g.
*
* switch (n) {
* case 22:
* case 33: // no warning: no statements between case labels
* f();
* case 44: // warning: unannotated fall-through
* g();
* FOLLY_FALLTHROUGH; // no warning: annotated fall-through
* }
*/
#if FOLLY_HAS_CPP_ATTRIBUTE(fallthrough)
#define FOLLY_FALLTHROUGH [[fallthrough]]
#elif FOLLY_HAS_CPP_ATTRIBUTE(clang::fallthrough)
#define FOLLY_FALLTHROUGH [[clang::fallthrough]]
#elif FOLLY_HAS_CPP_ATTRIBUTE(gnu::fallthrough)
#define FOLLY_FALLTHROUGH [[gnu::fallthrough]]
#else
#define FOLLY_FALLTHROUGH
#endif
/**
 * maybe_unused indicates that a function, variable, or parameter might or
 * might not be used, e.g.
*
* int foo(FOLLY_MAYBE_UNUSED int x) {
* #ifdef USE_X
* return x;
* #else
* return 0;
* #endif
* }
*/
#if FOLLY_HAS_CPP_ATTRIBUTE(maybe_unused)
#define FOLLY_MAYBE_UNUSED [[maybe_unused]]
#elif FOLLY_HAS_ATTRIBUTE(__unused__) || __GNUC__
#define FOLLY_MAYBE_UNUSED __attribute__((__unused__))
#else
#define FOLLY_MAYBE_UNUSED
#endif
/**
* Nullable indicates that a return value or a parameter may be a `nullptr`,
* e.g.
*
* int* FOLLY_NULLABLE foo(int* a, int* FOLLY_NULLABLE b) {
* if (*a > 0) { // safe dereference
* return nullptr;
* }
* if (*b < 0) { // unsafe dereference
* return *a;
* }
* if (b != nullptr && *b == 1) { // safe checked dereference
* return new int(1);
* }
* return nullptr;
* }
*/
#if FOLLY_HAS_EXTENSION(nullability)
#define FOLLY_NULLABLE _Nullable
#define FOLLY_NONNULL _Nonnull
#else
#define FOLLY_NULLABLE
#define FOLLY_NONNULL
#endif
/**
* "Cold" indicates to the compiler that a function is only expected to be
* called from unlikely code paths. It can affect decisions made by the
* optimizer both when processing the function body and when analyzing
* call-sites.
*/
#if __GNUC__
#define FOLLY_COLD __attribute__((__cold__))
#else
#define FOLLY_COLD
#endif


@@ -1,218 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <cstdint>
#include <folly/Portability.h>
#ifdef _MSC_VER
#include <intrin.h>
#endif
namespace folly {
/**
* Identification of an Intel CPU.
* Supports CPUID feature flags (EAX=1) and extended features (EAX=7, ECX=0).
* Values from
* http://www.intel.com/content/www/us/en/processors/processor-identification-cpuid-instruction-note.html
*/
class CpuId {
public:
// Always inline in order for this to be usable from a __ifunc__.
// In shared library mode, a __ifunc__ runs at relocation time, while the
// PLT hasn't been fully populated yet; thus, ifuncs cannot use symbols
// with potentially external linkage. (This issue is less likely in opt
// mode, since inlining is more aggressive, and it doesn't happen for
// statically linked binaries, which don't depend on the PLT.)
FOLLY_ALWAYS_INLINE CpuId() {
#if defined(_MSC_VER) && (FOLLY_X64 || defined(_M_IX86))
int reg[4];
__cpuid(static_cast<int*>(reg), 0);
const int n = reg[0];
if (n >= 1) {
__cpuid(static_cast<int*>(reg), 1);
f1c_ = uint32_t(reg[2]);
f1d_ = uint32_t(reg[3]);
}
if (n >= 7) {
__cpuidex(static_cast<int*>(reg), 7, 0);
f7b_ = uint32_t(reg[1]);
f7c_ = uint32_t(reg[2]);
}
#elif defined(__i386__) && defined(__PIC__) && !defined(__clang__) && \
defined(__GNUC__)
// The following block is like the normal cpuid branch below, but gcc
// reserves ebx for use as its PIC register, so we must specially
// handle the save and restore to avoid clobbering that register
uint32_t n;
__asm__(
"pushl %%ebx\n\t"
"cpuid\n\t"
"popl %%ebx\n\t"
: "=a"(n)
: "a"(0)
: "ecx", "edx");
if (n >= 1) {
uint32_t f1a;
__asm__(
"pushl %%ebx\n\t"
"cpuid\n\t"
"popl %%ebx\n\t"
: "=a"(f1a), "=c"(f1c_), "=d"(f1d_)
: "a"(1)
:);
}
if (n >= 7) {
__asm__(
"pushl %%ebx\n\t"
"cpuid\n\t"
"movl %%ebx, %%eax\n\r"
"popl %%ebx"
: "=a"(f7b_), "=c"(f7c_)
: "a"(7), "c"(0)
: "edx");
}
#elif FOLLY_X64 || defined(__i386__)
uint32_t n;
__asm__("cpuid" : "=a"(n) : "a"(0) : "ebx", "ecx", "edx");
if (n >= 1) {
uint32_t f1a;
__asm__("cpuid" : "=a"(f1a), "=c"(f1c_), "=d"(f1d_) : "a"(1) : "ebx");
}
if (n >= 7) {
uint32_t f7a;
__asm__("cpuid"
: "=a"(f7a), "=b"(f7b_), "=c"(f7c_)
: "a"(7), "c"(0)
: "edx");
}
#endif
}
#define X(name, r, bit) \
FOLLY_ALWAYS_INLINE bool name() const { \
return ((r) & (1U << bit)) != 0; \
}
// cpuid(1): Processor Info and Feature Bits.
#define C(name, bit) X(name, f1c_, bit)
C(sse3, 0)
C(pclmuldq, 1)
C(dtes64, 2)
C(monitor, 3)
C(dscpl, 4)
C(vmx, 5)
C(smx, 6)
C(eist, 7)
C(tm2, 8)
C(ssse3, 9)
C(cnxtid, 10)
C(fma, 12)
C(cx16, 13)
C(xtpr, 14)
C(pdcm, 15)
C(pcid, 17)
C(dca, 18)
C(sse41, 19)
C(sse42, 20)
C(x2apic, 21)
C(movbe, 22)
C(popcnt, 23)
C(tscdeadline, 24)
C(aes, 25)
C(xsave, 26)
C(osxsave, 27)
C(avx, 28)
C(f16c, 29)
C(rdrand, 30)
#undef C
#define D(name, bit) X(name, f1d_, bit)
D(fpu, 0)
D(vme, 1)
D(de, 2)
D(pse, 3)
D(tsc, 4)
D(msr, 5)
D(pae, 6)
D(mce, 7)
D(cx8, 8)
D(apic, 9)
D(sep, 11)
D(mtrr, 12)
D(pge, 13)
D(mca, 14)
D(cmov, 15)
D(pat, 16)
D(pse36, 17)
D(psn, 18)
D(clfsh, 19)
D(ds, 21)
D(acpi, 22)
D(mmx, 23)
D(fxsr, 24)
D(sse, 25)
D(sse2, 26)
D(ss, 27)
D(htt, 28)
D(tm, 29)
D(pbe, 31)
#undef D
// cpuid(7): Extended Features.
#define B(name, bit) X(name, f7b_, bit)
B(bmi1, 3)
B(hle, 4)
B(avx2, 5)
B(smep, 7)
B(bmi2, 8)
B(erms, 9)
B(invpcid, 10)
B(rtm, 11)
B(mpx, 14)
B(avx512f, 16)
B(avx512dq, 17)
B(rdseed, 18)
B(adx, 19)
B(smap, 20)
B(avx512ifma, 21)
B(pcommit, 22)
B(clflushopt, 23)
B(clwb, 24)
B(avx512pf, 26)
B(avx512er, 27)
B(avx512cd, 28)
B(sha, 29)
B(avx512bw, 30)
B(avx512vl, 31)
#undef B
#define C(name, bit) X(name, f7c_, bit)
C(prefetchwt1, 0)
C(avx512vbmi, 1)
#undef C
#undef X
private:
uint32_t f1c_ = 0;
uint32_t f1d_ = 0;
uint32_t f7b_ = 0;
uint32_t f7c_ = 0;
};
} // namespace folly


@@ -1,150 +0,0 @@
/*
* Copyright 2018-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <future>
#include <glog/logging.h>
#include <folly/Executor.h>
#include <folly/synchronization/Baton.h>
namespace folly {
/// An Executor accepts units of work with add(), which should be
/// threadsafe.
class DefaultKeepAliveExecutor : public virtual Executor {
public:
DefaultKeepAliveExecutor() : Executor() {}
virtual ~DefaultKeepAliveExecutor() {
DCHECK(!keepAlive_);
}
folly::Executor::KeepAlive<> weakRef() {
return WeakRef::create(controlBlock_, this);
}
protected:
void joinKeepAlive() {
DCHECK(keepAlive_);
keepAlive_.reset();
keepAliveReleaseBaton_.wait();
}
private:
struct ControlBlock {
std::atomic<ssize_t> keepAliveCount_{1};
};
class WeakRef : public Executor {
public:
static folly::Executor::KeepAlive<> create(
std::shared_ptr<ControlBlock> controlBlock,
Executor* executor) {
return makeKeepAlive(new WeakRef(std::move(controlBlock), executor));
}
void add(Func f) override {
if (auto executor = lock()) {
executor->add(std::move(f));
}
}
void addWithPriority(Func f, int8_t priority) override {
if (auto executor = lock()) {
executor->addWithPriority(std::move(f), priority);
}
}
virtual uint8_t getNumPriorities() const override {
return numPriorities_;
}
private:
WeakRef(std::shared_ptr<ControlBlock> controlBlock, Executor* executor)
: controlBlock_(std::move(controlBlock)),
executor_(executor),
numPriorities_(executor->getNumPriorities()) {}
bool keepAliveAcquire() override {
auto keepAliveCount =
keepAliveCount_.fetch_add(1, std::memory_order_relaxed);
// We should never increment from 0
DCHECK(keepAliveCount > 0);
return true;
}
void keepAliveRelease() override {
auto keepAliveCount =
keepAliveCount_.fetch_sub(1, std::memory_order_acq_rel);
DCHECK(keepAliveCount >= 1);
if (keepAliveCount == 1) {
delete this;
}
}
folly::Executor::KeepAlive<> lock() {
auto controlBlock =
controlBlock_->keepAliveCount_.load(std::memory_order_relaxed);
do {
if (controlBlock == 0) {
return {};
}
} while (!controlBlock_->keepAliveCount_.compare_exchange_weak(
controlBlock,
controlBlock + 1,
std::memory_order_release,
std::memory_order_relaxed));
return makeKeepAlive(executor_);
}
std::atomic<size_t> keepAliveCount_{1};
std::shared_ptr<ControlBlock> controlBlock_;
Executor* executor_;
uint8_t numPriorities_;
};
bool keepAliveAcquire() override {
auto keepAliveCount =
controlBlock_->keepAliveCount_.fetch_add(1, std::memory_order_relaxed);
// We should never increment from 0
DCHECK(keepAliveCount > 0);
return true;
}
void keepAliveRelease() override {
auto keepAliveCount =
controlBlock_->keepAliveCount_.fetch_sub(1, std::memory_order_acquire);
DCHECK(keepAliveCount >= 1);
if (keepAliveCount == 1) {
keepAliveReleaseBaton_.post(); // std::memory_order_release
}
}
std::shared_ptr<ControlBlock> controlBlock_{std::make_shared<ControlBlock>()};
Baton<> keepAliveReleaseBaton_;
KeepAlive<DefaultKeepAliveExecutor> keepAlive_{
makeKeepAlive<DefaultKeepAliveExecutor>(this)};
};
} // namespace folly


@@ -1,131 +0,0 @@
/*
* Copyright 2014-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/Demangle.h>
#include <algorithm>
#include <cstring>
#include <folly/detail/Demangle.h>
#include <folly/portability/Config.h>
#if FOLLY_DETAIL_HAVE_DEMANGLE_H
#include <cxxabi.h>
#endif
namespace folly {
#if FOLLY_DETAIL_HAVE_DEMANGLE_H
fbstring demangle(const char* name) {
#ifdef FOLLY_DEMANGLE_MAX_SYMBOL_SIZE
// GCC's __cxa_demangle() uses on-stack data structures for the
// parser state which are linear in the number of components of the
// symbol. For extremely long symbols, this can cause a stack
// overflow. We set an arbitrary symbol length limit above which we
// just return the mangled name.
size_t mangledLen = strlen(name);
if (mangledLen > FOLLY_DEMANGLE_MAX_SYMBOL_SIZE) {
return fbstring(name, mangledLen);
}
#endif
int status;
size_t len = 0;
// malloc() memory for the demangled type name
char* demangled = abi::__cxa_demangle(name, nullptr, &len, &status);
if (status != 0) {
return name;
}
// len is the length of the buffer (including NUL terminator and maybe
// other junk)
return fbstring(demangled, strlen(demangled), len, AcquireMallocatedString());
}
namespace {
struct DemangleBuf {
char* dest;
size_t remaining;
size_t total;
};
void demangleCallback(const char* str, size_t size, void* p) {
DemangleBuf* buf = static_cast<DemangleBuf*>(p);
size_t n = std::min(buf->remaining, size);
memcpy(buf->dest, str, n);
buf->dest += n;
buf->remaining -= n;
buf->total += size;
}
} // namespace
size_t demangle(const char* name, char* out, size_t outSize) {
#ifdef FOLLY_DEMANGLE_MAX_SYMBOL_SIZE
size_t mangledLen = strlen(name);
if (mangledLen > FOLLY_DEMANGLE_MAX_SYMBOL_SIZE) {
if (outSize) {
size_t n = std::min(mangledLen, outSize - 1);
memcpy(out, name, n);
out[n] = '\0';
}
return mangledLen;
}
#endif
DemangleBuf dbuf;
dbuf.dest = out;
dbuf.remaining = outSize ? outSize - 1 : 0; // leave room for null term
dbuf.total = 0;
// Unlike most library functions, this returns 1 on success and 0 on failure
int status =
detail::cplus_demangle_v3_callback_wrapper(name, demangleCallback, &dbuf);
if (status == 0) { // failed, return original
return folly::strlcpy(out, name, outSize);
}
if (outSize != 0) {
*dbuf.dest = '\0';
}
return dbuf.total;
}
#else
fbstring demangle(const char* name) {
return name;
}
size_t demangle(const char* name, char* out, size_t outSize) {
return folly::strlcpy(out, name, outSize);
}
#endif
size_t strlcpy(char* dest, const char* const src, size_t size) {
size_t len = strlen(src);
if (size != 0) {
size_t n = std::min(len, size - 1); // always null terminate!
memcpy(dest, src, n);
dest[n] = '\0';
}
return len;
}
} // namespace folly


@@ -1,65 +0,0 @@
/*
* Copyright 2014-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <folly/FBString.h>
namespace folly {
/**
 * Return the demangled (prettified) version of a C++ type.
*
* This function tries to produce a human-readable type, but the type name will
* be returned unchanged in case of error or if demangling isn't supported on
* your system.
*
* Use for debugging -- do not rely on demangle() returning anything useful.
*
* This function may allocate memory (and therefore throw std::bad_alloc).
*/
fbstring demangle(const char* name);
inline fbstring demangle(const std::type_info& type) {
return demangle(type.name());
}
/**
 * Return the demangled (prettified) version of a C++ type in a user-provided
 * buffer.
*
* The semantics are the same as for snprintf or strlcpy: bufSize is the size
* of the buffer, the string is always null-terminated, and the return value is
* the number of characters (not including the null terminator) that would have
* been written if the buffer was big enough. (So a return value >= bufSize
* indicates that the output was truncated)
*
* This function does not allocate memory and is async-signal-safe.
*
 * Note that while the underlying function for the fbstring-returning demangle
 * is somewhat standard (abi::__cxa_demangle, which uses malloc), the
 * underlying function for this version is less so (cplus_demangle_v3_callback
 * from libiberty), so it is possible for the fbstring version to work while
 * this version returns the original, mangled name.
size_t demangle(const char* name, char* buf, size_t bufSize);
inline size_t demangle(const std::type_info& type, char* buf, size_t bufSize) {
return demangle(type.name(), buf, bufSize);
}
// glibc doesn't have strlcpy
size_t strlcpy(char* dest, const char* const src, size_t size);
} // namespace folly


@@ -1,247 +0,0 @@
/*
* Copyright 2011-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Discriminated pointer: Type-safe pointer to one of several types.
*
* Similar to boost::variant, but has no space overhead over a raw pointer, as
* it relies on the fact that (on x86_64) there are 16 unused bits in a
* pointer.
*
* @author Tudor Bosman (tudorb@fb.com)
*/
#pragma once
#include <limits>
#include <stdexcept>
#include <glog/logging.h>
#include <folly/Likely.h>
#include <folly/Portability.h>
#include <folly/detail/DiscriminatedPtrDetail.h>
#if !FOLLY_X64 && !FOLLY_AARCH64 && !FOLLY_PPC64
#error "DiscriminatedPtr is x64, arm64 and ppc64 specific code."
#endif
namespace folly {
/**
* Discriminated pointer.
*
* Given a list of types, a DiscriminatedPtr<Types...> may point to an object
* of one of the given types, or may be empty. DiscriminatedPtr is type-safe:
* you may only get a pointer to the type that you put in, otherwise get
* throws an exception (and get_nothrow returns nullptr)
*
* This pointer does not do any kind of lifetime management -- it's not a
* "smart" pointer. You are responsible for deallocating any memory used
* to hold pointees, if necessary.
*/
template <typename... Types>
class DiscriminatedPtr {
// <, not <=, as our indexes are 1-based (0 means "empty")
static_assert(
sizeof...(Types) < std::numeric_limits<uint16_t>::max(),
"too many types");
public:
/**
* Create an empty DiscriminatedPtr.
*/
DiscriminatedPtr() : data_(0) {}
/**
* Create a DiscriminatedPtr that points to an object of type T.
* Fails at compile time if T is not a valid type (listed in Types)
*/
template <typename T>
explicit DiscriminatedPtr(T* ptr) {
set(ptr, typeIndex<T>());
}
/**
* Set this DiscriminatedPtr to point to an object of type T.
* Fails at compile time if T is not a valid type (listed in Types)
*/
template <typename T>
void set(T* ptr) {
set(ptr, typeIndex<T>());
}
/**
* Get a pointer to the object that this DiscriminatedPtr points to, if it is
* of type T. Fails at compile time if T is not a valid type (listed in
* Types), and returns nullptr if this DiscriminatedPtr is empty or points to
* an object of a different type.
*/
template <typename T>
T* get_nothrow() noexcept {
void* p = LIKELY(hasType<T>()) ? ptr() : nullptr;
return static_cast<T*>(p);
}
template <typename T>
const T* get_nothrow() const noexcept {
const void* p = LIKELY(hasType<T>()) ? ptr() : nullptr;
return static_cast<const T*>(p);
}
/**
* Get a pointer to the object that this DiscriminatedPtr points to, if it is
* of type T. Fails at compile time if T is not a valid type (listed in
* Types), and throws std::invalid_argument if this DiscriminatedPtr is empty
* or points to an object of a different type.
*/
template <typename T>
T* get() {
if (UNLIKELY(!hasType<T>())) {
throw std::invalid_argument("Invalid type");
}
return static_cast<T*>(ptr());
}
template <typename T>
const T* get() const {
if (UNLIKELY(!hasType<T>())) {
throw std::invalid_argument("Invalid type");
}
return static_cast<const T*>(ptr());
}
/**
* Return true iff this DiscriminatedPtr is empty.
*/
bool empty() const {
return index() == 0;
}
/**
* Return true iff the object pointed by this DiscriminatedPtr has type T,
* false otherwise. Fails at compile time if T is not a valid type (listed
* in Types...)
*/
template <typename T>
bool hasType() const {
return index() == typeIndex<T>();
}
/**
* Clear this DiscriminatedPtr, making it empty.
*/
void clear() {
data_ = 0;
}
/**
* Assignment operator from a pointer of type T.
*/
template <typename T>
DiscriminatedPtr& operator=(T* ptr) {
set(ptr);
return *this;
}
/**
* Apply a visitor to this object, calling the appropriate overload for
* the type currently stored in DiscriminatedPtr. Throws invalid_argument
* if the DiscriminatedPtr is empty.
*
* The visitor must meet the following requirements:
*
* - The visitor must allow invocation as a function by overloading
* operator(), unambiguously accepting all values of type T* (or const T*)
* for all T in Types...
* - All operations of the function object on T* (or const T*) must
* return the same type (or a static_assert will fire).
*/
template <typename V>
typename dptr_detail::VisitorResult<V, Types...>::type apply(V&& visitor) {
size_t n = index();
if (n == 0) {
throw std::invalid_argument("Empty DiscriminatedPtr");
}
return dptr_detail::ApplyVisitor<V, Types...>()(
n, std::forward<V>(visitor), ptr());
}
template <typename V>
typename dptr_detail::ConstVisitorResult<V, Types...>::type apply(
V&& visitor) const {
size_t n = index();
if (n == 0) {
throw std::invalid_argument("Empty DiscriminatedPtr");
}
return dptr_detail::ApplyConstVisitor<V, Types...>()(
n, std::forward<V>(visitor), ptr());
}
private:
/**
* Get the 1-based type index of T in Types.
*/
template <typename T>
uint16_t typeIndex() const {
return uint16_t(dptr_detail::GetTypeIndex<T, Types...>::value);
}
uint16_t index() const {
return data_ >> 48;
}
void* ptr() const {
return reinterpret_cast<void*>(data_ & ((1ULL << 48) - 1));
}
void set(void* p, uint16_t v) {
uintptr_t ip = reinterpret_cast<uintptr_t>(p);
CHECK(!(ip >> 48));
ip |= static_cast<uintptr_t>(v) << 48;
data_ = ip;
}
/**
* We store a pointer in the least significant 48 bits of data_, and a type
* index (0 = empty, or 1-based index in Types) in the most significant 16
* bits. We rely on the fact that pointers have their most significant 16
* bits clear on x86_64.
*/
uintptr_t data_;
};
template <typename Visitor, typename... Args>
decltype(auto) apply_visitor(
Visitor&& visitor,
const DiscriminatedPtr<Args...>& variant) {
return variant.apply(std::forward<Visitor>(visitor));
}
template <typename Visitor, typename... Args>
decltype(auto) apply_visitor(
Visitor&& visitor,
DiscriminatedPtr<Args...>& variant) {
return variant.apply(std::forward<Visitor>(visitor));
}
template <typename Visitor, typename... Args>
decltype(auto) apply_visitor(
Visitor&& visitor,
DiscriminatedPtr<Args...>&& variant) {
return variant.apply(std::forward<Visitor>(visitor));
}
} // namespace folly


@@ -1,403 +0,0 @@
/*
* Copyright 2012-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// @author Nicholas Ormrod <njormrod@fb.com>
#pragma once
#include <iterator>
#include <type_traits>
#include <boost/iterator/iterator_adaptor.hpp>
#include <boost/mpl/has_xxx.hpp>
#include <folly/Likely.h>
#include <folly/Optional.h>
#include <folly/Traits.h>
#include <folly/dynamic.h>
namespace folly {
template <typename T>
T convertTo(const dynamic&);
template <typename T>
dynamic toDynamic(const T&);
} // namespace folly
/**
* convertTo returns a well-typed representation of the input dynamic.
*
* Example:
*
* dynamic d = dynamic::array(
* dynamic::array(1, 2, 3),
* dynamic::array(4, 5)); // a vector of vector of int
* auto vvi = convertTo<fbvector<fbvector<int>>>(d);
*
* See docs/DynamicConverter.md for supported types and customization
*/
namespace folly {
///////////////////////////////////////////////////////////////////////////////
// traits
namespace dynamicconverter_detail {
BOOST_MPL_HAS_XXX_TRAIT_DEF(value_type)
BOOST_MPL_HAS_XXX_TRAIT_DEF(iterator)
BOOST_MPL_HAS_XXX_TRAIT_DEF(mapped_type)
BOOST_MPL_HAS_XXX_TRAIT_DEF(key_type)
template <typename T>
struct iterator_class_is_container {
typedef std::reverse_iterator<typename T::iterator> some_iterator;
enum {
value = has_value_type<T>::value &&
std::is_constructible<T, some_iterator, some_iterator>::value
};
};
template <typename T>
using class_is_container =
Conjunction<has_iterator<T>, iterator_class_is_container<T>>;
template <typename T>
using is_range = StrictConjunction<has_value_type<T>, has_iterator<T>>;
template <typename T>
using is_container = StrictConjunction<std::is_class<T>, class_is_container<T>>;
template <typename T>
using is_map = StrictConjunction<is_range<T>, has_mapped_type<T>>;
template <typename T>
using is_associative = StrictConjunction<is_range<T>, has_key_type<T>>;
} // namespace dynamicconverter_detail
///////////////////////////////////////////////////////////////////////////////
// custom iterators
/**
* We have iterators that dereference to dynamics, but need iterators
* that dereference to typename T.
*
* Implementation details:
* 1. We cache the value of the dereference operator. This is necessary
* because boost::iterator_adaptor requires *it to return a
* reference.
* 2. For const reasons, we cannot call operator= to refresh the
* cache: we must call the destructor then placement new.
*/
namespace dynamicconverter_detail {
template <typename T>
struct Dereferencer {
static inline void derefToCache(
Optional<T>* /* mem */,
const dynamic::const_item_iterator& /* it */) {
throw TypeError("array", dynamic::Type::OBJECT);
}
static inline void derefToCache(
Optional<T>* mem,
const dynamic::const_iterator& it) {
mem->emplace(convertTo<T>(*it));
}
};
template <typename F, typename S>
struct Dereferencer<std::pair<F, S>> {
static inline void derefToCache(
Optional<std::pair<F, S>>* mem,
const dynamic::const_item_iterator& it) {
mem->emplace(convertTo<F>(it->first), convertTo<S>(it->second));
}
// Intentional duplication of the code in Dereferencer
template <typename T>
static inline void derefToCache(
Optional<T>* mem,
const dynamic::const_iterator& it) {
mem->emplace(convertTo<T>(*it));
}
};
template <typename T, typename It>
class Transformer
: public boost::
iterator_adaptor<Transformer<T, It>, It, typename T::value_type> {
friend class boost::iterator_core_access;
typedef typename T::value_type ttype;
mutable Optional<ttype> cache_;
void increment() {
++this->base_reference();
cache_ = none;
}
ttype& dereference() const {
if (!cache_) {
Dereferencer<ttype>::derefToCache(&cache_, this->base_reference());
}
return cache_.value();
}
public:
explicit Transformer(const It& it) : Transformer::iterator_adaptor_(it) {}
};
// conversion factory
template <typename T, typename It>
inline std::move_iterator<Transformer<T, It>> conversionIterator(const It& it) {
return std::make_move_iterator(Transformer<T, It>(it));
}
} // namespace dynamicconverter_detail
///////////////////////////////////////////////////////////////////////////////
// DynamicConverter specializations
/**
* Each specialization of DynamicConverter has the function
* 'static T convert(const dynamic&);'
*/
// default - intentionally unimplemented
template <typename T, typename Enable = void>
struct DynamicConverter;
// boolean
template <>
struct DynamicConverter<bool> {
static bool convert(const dynamic& d) {
return d.asBool();
}
};
// integrals
template <typename T>
struct DynamicConverter<
T,
typename std::enable_if<
std::is_integral<T>::value && !std::is_same<T, bool>::value>::type> {
static T convert(const dynamic& d) {
return folly::to<T>(d.asInt());
}
};
// enums
template <typename T>
struct DynamicConverter<
T,
typename std::enable_if<std::is_enum<T>::value>::type> {
static T convert(const dynamic& d) {
using type = typename std::underlying_type<T>::type;
return static_cast<T>(DynamicConverter<type>::convert(d));
}
};
// floating point
template <typename T>
struct DynamicConverter<
T,
typename std::enable_if<std::is_floating_point<T>::value>::type> {
static T convert(const dynamic& d) {
return folly::to<T>(d.asDouble());
}
};
// fbstring
template <>
struct DynamicConverter<folly::fbstring> {
static folly::fbstring convert(const dynamic& d) {
return d.asString();
}
};
// std::string
template <>
struct DynamicConverter<std::string> {
static std::string convert(const dynamic& d) {
return d.asString();
}
};
// std::pair
template <typename F, typename S>
struct DynamicConverter<std::pair<F, S>> {
static std::pair<F, S> convert(const dynamic& d) {
if (d.isArray() && d.size() == 2) {
return std::make_pair(convertTo<F>(d[0]), convertTo<S>(d[1]));
} else if (d.isObject() && d.size() == 1) {
auto it = d.items().begin();
return std::make_pair(convertTo<F>(it->first), convertTo<S>(it->second));
} else {
throw TypeError("array (size 2) or object (size 1)", d.type());
}
}
};
// non-associative containers
template <typename C>
struct DynamicConverter<
C,
typename std::enable_if<
dynamicconverter_detail::is_container<C>::value &&
!dynamicconverter_detail::is_associative<C>::value>::type> {
static C convert(const dynamic& d) {
if (d.isArray()) {
return C(
dynamicconverter_detail::conversionIterator<C>(d.begin()),
dynamicconverter_detail::conversionIterator<C>(d.end()));
} else if (d.isObject()) {
return C(
dynamicconverter_detail::conversionIterator<C>(d.items().begin()),
dynamicconverter_detail::conversionIterator<C>(d.items().end()));
} else {
throw TypeError("object or array", d.type());
}
}
};
// associative containers
template <typename C>
struct DynamicConverter<
C,
typename std::enable_if<
dynamicconverter_detail::is_container<C>::value &&
dynamicconverter_detail::is_associative<C>::value>::type> {
static C convert(const dynamic& d) {
C ret; // avoid direct initialization due to unordered_map's constructor
// causing memory corruption if the iterator throws an exception
if (d.isArray()) {
ret.insert(
dynamicconverter_detail::conversionIterator<C>(d.begin()),
dynamicconverter_detail::conversionIterator<C>(d.end()));
} else if (d.isObject()) {
ret.insert(
dynamicconverter_detail::conversionIterator<C>(d.items().begin()),
dynamicconverter_detail::conversionIterator<C>(d.items().end()));
} else {
throw TypeError("object or array", d.type());
}
return ret;
}
};
///////////////////////////////////////////////////////////////////////////////
// DynamicConstructor specializations
/**
* Each specialization of DynamicConstructor has the function
* 'static dynamic construct(const C&);'
*/
// default
template <typename C, typename Enable = void>
struct DynamicConstructor {
static dynamic construct(const C& x) {
return dynamic(x);
}
};
// identity
template <typename C>
struct DynamicConstructor<
C,
typename std::enable_if<std::is_same<C, dynamic>::value>::type> {
static dynamic construct(const C& x) {
return x;
}
};
// maps
template <typename C>
struct DynamicConstructor<
C,
typename std::enable_if<
!std::is_same<C, dynamic>::value &&
dynamicconverter_detail::is_map<C>::value>::type> {
static dynamic construct(const C& x) {
dynamic d = dynamic::object;
for (const auto& pair : x) {
d.insert(toDynamic(pair.first), toDynamic(pair.second));
}
return d;
}
};
// other ranges
template <typename C>
struct DynamicConstructor<
C,
typename std::enable_if<
!std::is_same<C, dynamic>::value &&
!dynamicconverter_detail::is_map<C>::value &&
!std::is_constructible<StringPiece, const C&>::value &&
dynamicconverter_detail::is_range<C>::value>::type> {
static dynamic construct(const C& x) {
dynamic d = dynamic::array;
for (const auto& item : x) {
d.push_back(toDynamic(item));
}
return d;
}
};
// pair
template <typename A, typename B>
struct DynamicConstructor<std::pair<A, B>, void> {
static dynamic construct(const std::pair<A, B>& x) {
dynamic d = dynamic::array;
d.push_back(toDynamic(x.first));
d.push_back(toDynamic(x.second));
return d;
}
};
// vector<bool>
template <>
struct DynamicConstructor<std::vector<bool>, void> {
static dynamic construct(const std::vector<bool>& x) {
dynamic d = dynamic::array;
// Intentionally specifying the type as bool here.
// std::vector<bool>'s iterators return a proxy which is a prvalue
// and hence cannot bind to an lvalue reference such as auto&
for (bool item : x) {
d.push_back(toDynamic(item));
}
return d;
}
};
///////////////////////////////////////////////////////////////////////////////
// implementation
template <typename T>
T convertTo(const dynamic& d) {
return DynamicConverter<typename std::remove_cv<T>::type>::convert(d);
}
template <typename T>
dynamic toDynamic(const T& x) {
return DynamicConstructor<typename std::remove_cv<T>::type>::construct(x);
}
} // namespace folly
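The `DynamicConverter` machinery above dispatches on type traits via `std::enable_if` partial specializations. Since folly itself may not be at hand, here is a hedged, std-only sketch of that SFINAE dispatch technique; the names `is_container`, `Converter`, and `convertTo` are local stand-ins (not the folly symbols), and a plain `std::string` stands in for `folly::dynamic`:

```cpp
#include <cassert>
#include <string>
#include <type_traits>
#include <vector>

// Detect "containers" the same way folly's dynamicconverter_detail does in
// spirit: a type is a container if it exposes begin()/end().
template <typename, typename = void>
struct is_container : std::false_type {};
template <typename T>
struct is_container<T,
    std::void_t<decltype(std::declval<T>().begin()),
                decltype(std::declval<T>().end())>> : std::true_type {};

// Primary template: scalars. Partial specialization: containers, enabled
// only when the trait holds -- mirroring DynamicConverter's structure.
template <typename T, typename Enable = void>
struct Converter {
  static T convert(const std::string& s) { return T(std::stoi(s)); }
};
template <typename C>
struct Converter<C, std::enable_if_t<is_container<C>::value>> {
  static C convert(const std::string& s) {
    C out;
    for (char ch : s) {
      out.push_back(
          Converter<typename C::value_type>::convert(std::string(1, ch)));
    }
    return out;
  }
};

// Entry point, analogous to folly::convertTo<T>.
template <typename T>
T convertTo(const std::string& s) {
  return Converter<std::remove_cv_t<T>>::convert(s);
}
```

Usage follows the same shape as the real API: `convertTo<int>("42")` selects the primary template, while `convertTo<std::vector<int>>("123")` selects the container specialization and converts element-wise.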


@@ -1,142 +0,0 @@
/*
* Copyright 2013-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <errno.h>
#include <cstdio>
#include <stdexcept>
#include <system_error>
#include <folly/Conv.h>
#include <folly/FBString.h>
#include <folly/Likely.h>
#include <folly/Portability.h>
namespace folly {
// Various helpers to throw appropriate std::system_error exceptions from C
// library errors (returned in errno, as positive return values (many POSIX
// functions), or as negative return values (Linux syscalls))
//
// The *Explicit functions take an explicit value for errno.
inline std::system_error makeSystemErrorExplicit(int err, const char* msg) {
// TODO: The C++ standard indicates that std::generic_category() should be
// used for POSIX errno codes.
//
// We should ideally change this to use std::generic_category() instead of
// std::system_category(). However, undertaking this change will require
// updating existing call sites that currently catch exceptions thrown by
// this code and currently expect std::system_category.
return std::system_error(err, std::system_category(), msg);
}
template <class... Args>
std::system_error makeSystemErrorExplicit(int err, Args&&... args) {
return makeSystemErrorExplicit(
err, to<fbstring>(std::forward<Args>(args)...).c_str());
}
inline std::system_error makeSystemError(const char* msg) {
return makeSystemErrorExplicit(errno, msg);
}
template <class... Args>
std::system_error makeSystemError(Args&&... args) {
return makeSystemErrorExplicit(errno, std::forward<Args>(args)...);
}
// Helper to throw std::system_error
[[noreturn]] inline void throwSystemErrorExplicit(int err, const char* msg) {
throw makeSystemErrorExplicit(err, msg);
}
template <class... Args>
[[noreturn]] void throwSystemErrorExplicit(int err, Args&&... args) {
throw makeSystemErrorExplicit(err, std::forward<Args>(args)...);
}
// Helper to throw std::system_error from errno and components of a string
template <class... Args>
[[noreturn]] void throwSystemError(Args&&... args) {
throwSystemErrorExplicit(errno, std::forward<Args>(args)...);
}
// Check a Posix return code (0 on success, error number on error), throw
// on error.
template <class... Args>
void checkPosixError(int err, Args&&... args) {
if (UNLIKELY(err != 0)) {
throwSystemErrorExplicit(err, std::forward<Args>(args)...);
}
}
// Check a Linux kernel-style return code (>= 0 on success, negative error
// number on error), throw on error.
template <class... Args>
void checkKernelError(ssize_t ret, Args&&... args) {
if (UNLIKELY(ret < 0)) {
throwSystemErrorExplicit(int(-ret), std::forward<Args>(args)...);
}
}
// Check a traditional Unix return code (-1 and sets errno on error), throw
// on error.
template <class... Args>
void checkUnixError(ssize_t ret, Args&&... args) {
if (UNLIKELY(ret == -1)) {
throwSystemError(std::forward<Args>(args)...);
}
}
template <class... Args>
void checkUnixErrorExplicit(ssize_t ret, int savedErrno, Args&&... args) {
if (UNLIKELY(ret == -1)) {
throwSystemErrorExplicit(savedErrno, std::forward<Args>(args)...);
}
}
// Check the return code from a fopen-style function (returns a non-nullptr
// FILE* on success, nullptr on error, sets errno). Works with fopen, fdopen,
// freopen, tmpfile, etc.
template <class... Args>
void checkFopenError(FILE* fp, Args&&... args) {
if (UNLIKELY(!fp)) {
throwSystemError(std::forward<Args>(args)...);
}
}
template <class... Args>
void checkFopenErrorExplicit(FILE* fp, int savedErrno, Args&&... args) {
if (UNLIKELY(!fp)) {
throwSystemErrorExplicit(savedErrno, std::forward<Args>(args)...);
}
}
/**
* If cond is not true, raise an exception of type E. E must have a ctor that
* works with const char* (a description of the failure).
*/
#define CHECK_THROW(cond, E) \
do { \
if (!(cond)) { \
throw E("Check failed: " #cond); \
} \
} while (0)
} // namespace folly
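The helpers in this header all reduce to one move: capture `errno` (or an explicit error code) and throw a `std::system_error` built from it. A minimal std-only sketch of the `checkFopenError` pattern, using `std::string` in place of `fbstring` and `folly::to` (the name `checkFopenError` here is a local stand-in, not the folly symbol):

```cpp
#include <cerrno>
#include <cstdio>
#include <system_error>

// Wrap errno in a std::system_error when a fopen-style call returns
// nullptr. Like folly, this uses std::system_category() even though the
// standard suggests std::generic_category() for POSIX errno values.
inline void checkFopenError(std::FILE* fp, const char* msg) {
  if (!fp) {
    throw std::system_error(errno, std::system_category(), msg);
  }
}
```

A caller would pass the result of `std::fopen` straight through: on failure the thrown `std::system_error` carries both the error code and the caller's message, so the call site needs no errno plumbing of its own.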


@@ -1,70 +0,0 @@
/*
* Copyright 2016-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <exception>
#include <string>
#include <type_traits>
#include <folly/Demangle.h>
#include <folly/FBString.h>
#include <folly/Portability.h>
namespace folly {
/**
* Debug string for an exception: include type and what(), if
* defined.
*/
inline fbstring exceptionStr(const std::exception& e) {
#ifdef FOLLY_HAS_RTTI
fbstring rv(demangle(typeid(e)));
rv += ": ";
#else
fbstring rv("Exception (no RTTI available): ");
#endif
rv += e.what();
return rv;
}
// Empirically, this indicates if the runtime supports
// std::exception_ptr, as not all (arm, for instance) do.
#if defined(__GNUC__) && defined(__GCC_ATOMIC_INT_LOCK_FREE) && \
__GCC_ATOMIC_INT_LOCK_FREE > 1
inline fbstring exceptionStr(std::exception_ptr ep) {
try {
std::rethrow_exception(ep);
} catch (const std::exception& e) {
return exceptionStr(e);
} catch (...) {
return "<unknown exception>";
}
}
#endif
template <typename E>
auto exceptionStr(const E& e) -> typename std::
enable_if<!std::is_base_of<std::exception, E>::value, fbstring>::type {
#ifdef FOLLY_HAS_RTTI
return demangle(typeid(e));
#else
(void)e;
return "Exception (no RTTI available) ";
#endif
}
} // namespace folly
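The `exceptionStr(std::exception_ptr)` overload above illustrates a useful idiom: the only portable way to inspect an `exception_ptr` is to rethrow it and catch by type. A std-only sketch of the same idea, substituting `std::string` for `fbstring` and the raw `typeid(...).name()` (which is implementation-defined and mangled) for `folly::demangle`:

```cpp
#include <exception>
#include <stdexcept>
#include <string>
#include <typeinfo>

// Debug string for an exception: dynamic type name plus what().
// typeid(e) on a caught reference yields the most-derived type.
inline std::string exceptionStr(const std::exception& e) {
  std::string rv(typeid(e).name());  // mangled; folly would demangle this
  rv += ": ";
  rv += e.what();
  return rv;
}

// Inspect an exception_ptr by rethrowing it, as the folly overload does.
inline std::string exceptionStr(std::exception_ptr ep) {
  try {
    std::rethrow_exception(ep);
  } catch (const std::exception& e) {
    return exceptionStr(e);
  } catch (...) {
    return "<unknown exception>";
  }
}
```

Note the rethrow cost: this is exactly why the `#if` guard above checks that the runtime supports `std::exception_ptr` before defining the overload.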


@@ -1,677 +0,0 @@
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
*
* Author: Eric Niebler <eniebler@fb.com>
*/
#include <folly/Portability.h>
namespace folly {
template <class Fn>
struct exception_wrapper::arg_type_
: public arg_type_<decltype(&Fn::operator())> {};
template <class Ret, class Class, class Arg>
struct exception_wrapper::arg_type_<Ret (Class::*)(Arg)> {
using type = Arg;
};
template <class Ret, class Class, class Arg>
struct exception_wrapper::arg_type_<Ret (Class::*)(Arg) const> {
using type = Arg;
};
template <class Ret, class Arg>
struct exception_wrapper::arg_type_<Ret(Arg)> {
using type = Arg;
};
template <class Ret, class Arg>
struct exception_wrapper::arg_type_<Ret (*)(Arg)> {
using type = Arg;
};
template <class Ret, class Class>
struct exception_wrapper::arg_type_<Ret (Class::*)(...)> {
using type = AnyException;
};
template <class Ret, class Class>
struct exception_wrapper::arg_type_<Ret (Class::*)(...) const> {
using type = AnyException;
};
template <class Ret>
struct exception_wrapper::arg_type_<Ret(...)> {
using type = AnyException;
};
template <class Ret>
struct exception_wrapper::arg_type_<Ret (*)(...)> {
using type = AnyException;
};
template <class Ret, class... Args>
inline Ret exception_wrapper::noop_(Args...) {
return Ret();
}
inline std::type_info const* exception_wrapper::uninit_type_(
exception_wrapper const*) {
return &typeid(void);
}
template <class Ex, typename... As>
inline exception_wrapper::Buffer::Buffer(in_place_type_t<Ex>, As&&... as_) {
::new (static_cast<void*>(&buff_)) Ex(std::forward<As>(as_)...);
}
template <class Ex>
inline Ex& exception_wrapper::Buffer::as() noexcept {
return *static_cast<Ex*>(static_cast<void*>(&buff_));
}
template <class Ex>
inline Ex const& exception_wrapper::Buffer::as() const noexcept {
return *static_cast<Ex const*>(static_cast<void const*>(&buff_));
}
inline std::exception const* exception_wrapper::as_exception_or_null_(
std::exception const& ex) {
return &ex;
}
inline std::exception const* exception_wrapper::as_exception_or_null_(
AnyException) {
return nullptr;
}
static_assert(
!kMicrosoftAbiVer || (kMicrosoftAbiVer >= 1900 && kMicrosoftAbiVer <= 2000),
"exception_wrapper is untested and possibly broken on your version of "
"MSVC");
inline std::uintptr_t exception_wrapper::ExceptionPtr::as_int_(
std::exception_ptr const& ptr,
std::exception const& e) noexcept {
if (!kMicrosoftAbiVer) {
return reinterpret_cast<std::uintptr_t>(&e);
} else {
// On Windows, as of MSVC2017, all thrown exceptions are copied to the stack
// first. Thus, we cannot depend on exception references associated with an
// exception_ptr to be live for the duration of the exception_ptr. We need
// to directly access the heap allocated memory inside the exception_ptr.
//
// std::exception_ptr is an opaque reinterpret_cast of
// std::shared_ptr<__ExceptionPtr>
// __ExceptionPtr is a non-virtual class with two members, a union and a
// bool. The union contains the now-undocumented EHExceptionRecord, which
// contains a struct which contains a void* which points to the heap
// allocated exception.
// We derive the offset to pExceptionObject via manual means.
FOLLY_PACK_PUSH
struct Win32ExceptionPtr {
char offset[8 + 4 * sizeof(void*)];
void* exceptionObject;
} FOLLY_PACK_ATTR;
FOLLY_PACK_POP
auto* win32ExceptionPtr =
reinterpret_cast<std::shared_ptr<Win32ExceptionPtr> const*>(&ptr)
->get();
return reinterpret_cast<std::uintptr_t>(win32ExceptionPtr->exceptionObject);
}
}
inline std::uintptr_t exception_wrapper::ExceptionPtr::as_int_(
std::exception_ptr const&,
AnyException e) noexcept {
return reinterpret_cast<std::uintptr_t>(e.typeinfo_) + 1;
}
inline bool exception_wrapper::ExceptionPtr::has_exception_() const {
return 0 == exception_or_type_ % 2;
}
inline std::exception const* exception_wrapper::ExceptionPtr::as_exception_()
const {
return reinterpret_cast<std::exception const*>(exception_or_type_);
}
inline std::type_info const* exception_wrapper::ExceptionPtr::as_type_() const {
return reinterpret_cast<std::type_info const*>(exception_or_type_ - 1);
}
inline void exception_wrapper::ExceptionPtr::copy_(
exception_wrapper const* from,
exception_wrapper* to) {
::new (static_cast<void*>(&to->eptr_)) ExceptionPtr(from->eptr_);
}
inline void exception_wrapper::ExceptionPtr::move_(
exception_wrapper* from,
exception_wrapper* to) {
::new (static_cast<void*>(&to->eptr_)) ExceptionPtr(std::move(from->eptr_));
delete_(from);
}
inline void exception_wrapper::ExceptionPtr::delete_(exception_wrapper* that) {
that->eptr_.~ExceptionPtr();
that->vptr_ = &uninit_;
}
[[noreturn]] inline void exception_wrapper::ExceptionPtr::throw_(
exception_wrapper const* that) {
std::rethrow_exception(that->eptr_.ptr_);
}
inline std::type_info const* exception_wrapper::ExceptionPtr::type_(
exception_wrapper const* that) {
if (auto e = get_exception_(that)) {
return &typeid(*e);
}
return that->eptr_.as_type_();
}
inline std::exception const* exception_wrapper::ExceptionPtr::get_exception_(
exception_wrapper const* that) {
return that->eptr_.has_exception_() ? that->eptr_.as_exception_() : nullptr;
}
inline exception_wrapper exception_wrapper::ExceptionPtr::get_exception_ptr_(
exception_wrapper const* that) {
return *that;
}
template <class Ex>
inline void exception_wrapper::InPlace<Ex>::copy_(
exception_wrapper const* from,
exception_wrapper* to) {
::new (static_cast<void*>(std::addressof(to->buff_.as<Ex>())))
Ex(from->buff_.as<Ex>());
}
template <class Ex>
inline void exception_wrapper::InPlace<Ex>::move_(
exception_wrapper* from,
exception_wrapper* to) {
::new (static_cast<void*>(std::addressof(to->buff_.as<Ex>())))
Ex(std::move(from->buff_.as<Ex>()));
delete_(from);
}
template <class Ex>
inline void exception_wrapper::InPlace<Ex>::delete_(exception_wrapper* that) {
that->buff_.as<Ex>().~Ex();
that->vptr_ = &uninit_;
}
template <class Ex>
[[noreturn]] inline void exception_wrapper::InPlace<Ex>::throw_(
exception_wrapper const* that) {
throw that->buff_.as<Ex>(); // @nolint
}
template <class Ex>
inline std::type_info const* exception_wrapper::InPlace<Ex>::type_(
exception_wrapper const*) {
return &typeid(Ex);
}
template <class Ex>
inline std::exception const* exception_wrapper::InPlace<Ex>::get_exception_(
exception_wrapper const* that) {
return as_exception_or_null_(that->buff_.as<Ex>());
}
template <class Ex>
inline exception_wrapper exception_wrapper::InPlace<Ex>::get_exception_ptr_(
exception_wrapper const* that) {
try {
throw_(that);
} catch (Ex const& ex) {
return exception_wrapper{std::current_exception(), ex};
}
}
template <class Ex>
[[noreturn]] inline void exception_wrapper::SharedPtr::Impl<Ex>::throw_()
const {
throw ex_; // @nolint
}
template <class Ex>
inline std::exception const*
exception_wrapper::SharedPtr::Impl<Ex>::get_exception_() const noexcept {
return as_exception_or_null_(ex_);
}
template <class Ex>
inline exception_wrapper
exception_wrapper::SharedPtr::Impl<Ex>::get_exception_ptr_() const noexcept {
try {
throw_();
} catch (Ex& ex) {
return exception_wrapper{std::current_exception(), ex};
}
}
inline void exception_wrapper::SharedPtr::copy_(
exception_wrapper const* from,
exception_wrapper* to) {
::new (static_cast<void*>(std::addressof(to->sptr_))) SharedPtr(from->sptr_);
}
inline void exception_wrapper::SharedPtr::move_(
exception_wrapper* from,
exception_wrapper* to) {
::new (static_cast<void*>(std::addressof(to->sptr_)))
SharedPtr(std::move(from->sptr_));
delete_(from);
}
inline void exception_wrapper::SharedPtr::delete_(exception_wrapper* that) {
that->sptr_.~SharedPtr();
that->vptr_ = &uninit_;
}
[[noreturn]] inline void exception_wrapper::SharedPtr::throw_(
exception_wrapper const* that) {
that->sptr_.ptr_->throw_();
folly::assume_unreachable();
}
inline std::type_info const* exception_wrapper::SharedPtr::type_(
exception_wrapper const* that) {
return that->sptr_.ptr_->info_;
}
inline std::exception const* exception_wrapper::SharedPtr::get_exception_(
exception_wrapper const* that) {
return that->sptr_.ptr_->get_exception_();
}
inline exception_wrapper exception_wrapper::SharedPtr::get_exception_ptr_(
exception_wrapper const* that) {
return that->sptr_.ptr_->get_exception_ptr_();
}
template <class Ex, typename... As>
inline exception_wrapper::exception_wrapper(
ThrownTag,
in_place_type_t<Ex>,
As&&... as)
: eptr_{std::make_exception_ptr(Ex(std::forward<As>(as)...)),
reinterpret_cast<std::uintptr_t>(std::addressof(typeid(Ex))) + 1u},
vptr_(&ExceptionPtr::ops_) {}
template <class Ex, typename... As>
inline exception_wrapper::exception_wrapper(
OnHeapTag,
in_place_type_t<Ex>,
As&&... as)
: sptr_{std::make_shared<SharedPtr::Impl<Ex>>(std::forward<As>(as)...)},
vptr_(&SharedPtr::ops_) {}
template <class Ex, typename... As>
inline exception_wrapper::exception_wrapper(
InSituTag,
in_place_type_t<Ex>,
As&&... as)
: buff_{in_place_type<Ex>, std::forward<As>(as)...},
vptr_(&InPlace<Ex>::ops_) {}
inline exception_wrapper::exception_wrapper(exception_wrapper&& that) noexcept
: exception_wrapper{} {
(vptr_ = that.vptr_)->move_(&that, this); // Move into *this, won't throw
}
inline exception_wrapper::exception_wrapper(
exception_wrapper const& that) noexcept
: exception_wrapper{} {
that.vptr_->copy_(&that, this); // Copy into *this, won't throw
vptr_ = that.vptr_;
}
// If `this == &that`, this move assignment operator leaves the object in a
// valid but unspecified state.
inline exception_wrapper& exception_wrapper::operator=(
exception_wrapper&& that) noexcept {
vptr_->delete_(this); // Free the current exception
(vptr_ = that.vptr_)->move_(&that, this); // Move into *this, won't throw
return *this;
}
inline exception_wrapper& exception_wrapper::operator=(
exception_wrapper const& that) noexcept {
exception_wrapper(that).swap(*this);
return *this;
}
inline exception_wrapper::~exception_wrapper() {
reset();
}
template <class Ex>
inline exception_wrapper::exception_wrapper(
std::exception_ptr ptr,
Ex& ex) noexcept
: eptr_{ptr, ExceptionPtr::as_int_(ptr, ex)}, vptr_(&ExceptionPtr::ops_) {
assert(eptr_.ptr_);
}
namespace exception_wrapper_detail {
template <class Ex>
Ex&& dont_slice(Ex&& ex) {
assert(typeid(ex) == typeid(_t<std::decay<Ex>>) ||
!"Dynamic and static exception types don't match. Exception would "
"be sliced when storing in exception_wrapper.");
return std::forward<Ex>(ex);
}
} // namespace exception_wrapper_detail
template <
class Ex,
class Ex_,
FOLLY_REQUIRES_DEF(Conjunction<
exception_wrapper::IsStdException<Ex_>,
exception_wrapper::IsRegularExceptionType<Ex_>>::value)>
inline exception_wrapper::exception_wrapper(Ex&& ex)
: exception_wrapper{
PlacementOf<Ex_>{},
in_place_type<Ex_>,
exception_wrapper_detail::dont_slice(std::forward<Ex>(ex))} {}
template <
class Ex,
class Ex_,
FOLLY_REQUIRES_DEF(exception_wrapper::IsRegularExceptionType<Ex_>::value)>
inline exception_wrapper::exception_wrapper(in_place_t, Ex&& ex)
: exception_wrapper{
PlacementOf<Ex_>{},
in_place_type<Ex_>,
exception_wrapper_detail::dont_slice(std::forward<Ex>(ex))} {}
template <
class Ex,
typename... As,
FOLLY_REQUIRES_DEF(exception_wrapper::IsRegularExceptionType<Ex>::value)>
inline exception_wrapper::exception_wrapper(in_place_type_t<Ex>, As&&... as)
: exception_wrapper{PlacementOf<Ex>{},
in_place_type<Ex>,
std::forward<As>(as)...} {}
inline void exception_wrapper::swap(exception_wrapper& that) noexcept {
exception_wrapper tmp(std::move(that));
that = std::move(*this);
*this = std::move(tmp);
}
inline exception_wrapper::operator bool() const noexcept {
return vptr_ != &uninit_;
}
inline bool exception_wrapper::operator!() const noexcept {
return !static_cast<bool>(*this);
}
inline void exception_wrapper::reset() {
vptr_->delete_(this);
}
inline bool exception_wrapper::has_exception_ptr() const noexcept {
return vptr_ == &ExceptionPtr::ops_;
}
inline std::exception* exception_wrapper::get_exception() noexcept {
return const_cast<std::exception*>(vptr_->get_exception_(this));
}
inline std::exception const* exception_wrapper::get_exception() const noexcept {
return vptr_->get_exception_(this);
}
template <typename Ex>
inline Ex* exception_wrapper::get_exception() noexcept {
Ex* object{nullptr};
with_exception([&](Ex& ex) { object = &ex; });
return object;
}
template <typename Ex>
inline Ex const* exception_wrapper::get_exception() const noexcept {
Ex const* object{nullptr};
with_exception([&](Ex const& ex) { object = &ex; });
return object;
}
inline std::exception_ptr const&
exception_wrapper::to_exception_ptr() noexcept {
// Computing an exception_ptr is expensive so cache the result.
return (*this = vptr_->get_exception_ptr_(this)).eptr_.ptr_;
}
inline std::exception_ptr exception_wrapper::to_exception_ptr() const noexcept {
return vptr_->get_exception_ptr_(this).eptr_.ptr_;
}
inline std::type_info const& exception_wrapper::none() noexcept {
return typeid(void);
}
inline std::type_info const& exception_wrapper::unknown() noexcept {
return typeid(Unknown);
}
inline std::type_info const& exception_wrapper::type() const noexcept {
return *vptr_->type_(this);
}
inline folly::fbstring exception_wrapper::what() const {
if (auto e = get_exception()) {
return class_name() + ": " + e->what();
}
return class_name();
}
inline folly::fbstring exception_wrapper::class_name() const {
auto& ti = type();
return ti == none()
? ""
: ti == unknown() ? "<unknown exception>" : folly::demangle(ti);
}
template <class Ex>
inline bool exception_wrapper::is_compatible_with() const noexcept {
return with_exception([](Ex const&) {});
}
[[noreturn]] inline void exception_wrapper::throw_exception() const {
vptr_->throw_(this);
onNoExceptionError(__func__);
}
template <class Ex>
[[noreturn]] inline void exception_wrapper::throw_with_nested(Ex&& ex) const {
try {
throw_exception();
} catch (...) {
std::throw_with_nested(std::forward<Ex>(ex));
}
}
template <class CatchFn, bool IsConst>
struct exception_wrapper::ExceptionTypeOf {
using type = arg_type<_t<std::decay<CatchFn>>>;
static_assert(
std::is_reference<type>::value,
"Always catch exceptions by reference.");
static_assert(
!IsConst || std::is_const<_t<std::remove_reference<type>>>::value,
"handle() or with_exception() called on a const exception_wrapper "
"and asked to catch a non-const exception. Handler will never fire. "
"Catch exception by const reference to fix this.");
};
// Nests a throw in the proper try/catch blocks
template <bool IsConst>
struct exception_wrapper::HandleReduce {
bool* handled_;
template <
class ThrowFn,
class CatchFn,
FOLLY_REQUIRES(!IsCatchAll<CatchFn>::value)>
auto operator()(ThrowFn&& th, CatchFn& ca) const {
using Ex = _t<ExceptionTypeOf<CatchFn, IsConst>>;
return [th = std::forward<ThrowFn>(th), &ca, handled_ = handled_] {
try {
th();
} catch (Ex& e) {
// If we got here because a catch function threw, rethrow.
if (*handled_) {
throw;
}
*handled_ = true;
ca(e);
}
};
}
template <
class ThrowFn,
class CatchFn,
FOLLY_REQUIRES(IsCatchAll<CatchFn>::value)>
auto operator()(ThrowFn&& th, CatchFn& ca) const {
return [th = std::forward<ThrowFn>(th), &ca, handled_ = handled_] {
try {
th();
} catch (...) {
// If we got here because a catch function threw, rethrow.
if (*handled_) {
throw;
}
*handled_ = true;
ca();
}
};
}
};
// When all the handlers expect types derived from std::exception, we can
// sometimes invoke the handlers without throwing any exceptions.
template <bool IsConst>
struct exception_wrapper::HandleStdExceptReduce {
using StdEx = AddConstIf<IsConst, std::exception>;
template <
class ThrowFn,
class CatchFn,
FOLLY_REQUIRES(!IsCatchAll<CatchFn>::value)>
auto operator()(ThrowFn&& th, CatchFn& ca) const {
using Ex = _t<ExceptionTypeOf<CatchFn, IsConst>>;
return
[th = std::forward<ThrowFn>(th), &ca](auto&& continuation) -> StdEx* {
if (auto e = const_cast<StdEx*>(th(continuation))) {
if (auto e2 = dynamic_cast<_t<std::add_pointer<Ex>>>(e)) {
ca(*e2);
} else {
return e;
}
}
return nullptr;
};
}
template <
class ThrowFn,
class CatchFn,
FOLLY_REQUIRES(IsCatchAll<CatchFn>::value)>
auto operator()(ThrowFn&& th, CatchFn& ca) const {
return [th = std::forward<ThrowFn>(th), &ca](auto &&) -> StdEx* {
// The following continuation causes ca() to execute if *this contains
// an exception /not/ derived from std::exception.
auto continuation = [&ca](StdEx* e) {
return e != nullptr ? e : ((void)ca(), nullptr);
};
if (th(continuation) != nullptr) {
ca();
}
return nullptr;
};
}
};
// Called when some types in the catch clauses are not derived from
// std::exception.
template <class This, class... CatchFns>
inline void
exception_wrapper::handle_(std::false_type, This& this_, CatchFns&... fns) {
bool handled = false;
auto impl = exception_wrapper_detail::fold(
HandleReduce<std::is_const<This>::value>{&handled},
[&] { this_.throw_exception(); },
fns...);
impl();
}
// Called when all types in the catch clauses are either derived from
// std::exception or a catch-all clause.
template <class This, class... CatchFns>
inline void
exception_wrapper::handle_(std::true_type, This& this_, CatchFns&... fns) {
using StdEx = exception_wrapper_detail::
AddConstIf<std::is_const<This>::value, std::exception>;
auto impl = exception_wrapper_detail::fold(
HandleStdExceptReduce<std::is_const<This>::value>{},
[&](auto&& continuation) {
return continuation(
const_cast<StdEx*>(this_.vptr_->get_exception_(&this_)));
},
fns...);
// This continuation gets evaluated if CatchFns... does not include a
// catch-all handler. It is a no-op.
auto continuation = [](StdEx* ex) { return ex; };
if (nullptr != impl(continuation)) {
this_.throw_exception();
}
}
namespace exception_wrapper_detail {
template <class Ex, class Fn>
struct catch_fn {
Fn fn_;
auto operator()(Ex& ex) {
return fn_(ex);
}
};
template <class Ex, class Fn>
inline catch_fn<Ex, Fn> catch_(Ex*, Fn fn) {
return {std::move(fn)};
}
template <class Fn>
inline Fn catch_(void const*, Fn fn) {
return fn;
}
} // namespace exception_wrapper_detail
template <class Ex, class This, class Fn>
inline bool exception_wrapper::with_exception_(This& this_, Fn fn_) {
if (!this_) {
return false;
}
bool handled = true;
auto fn = exception_wrapper_detail::catch_(
static_cast<Ex*>(nullptr), std::move(fn_));
auto&& all = [&](...) { handled = false; };
handle_(IsStdException<arg_type<decltype(fn)>>{}, this_, fn, all);
return handled;
}
template <class Ex, class Fn>
inline bool exception_wrapper::with_exception(Fn fn) {
return with_exception_<Ex>(*this, std::move(fn));
}
template <class Ex, class Fn>
inline bool exception_wrapper::with_exception(Fn fn) const {
return with_exception_<Ex const>(*this, std::move(fn));
}
template <class... CatchFns>
inline void exception_wrapper::handle(CatchFns... fns) {
using AllStdEx =
exception_wrapper_detail::AllOf<IsStdException, arg_type<CatchFns>...>;
if (!*this) {
onNoExceptionError(__func__);
}
this->handle_(AllStdEx{}, *this, fns...);
}
template <class... CatchFns>
inline void exception_wrapper::handle(CatchFns... fns) const {
using AllStdEx =
exception_wrapper_detail::AllOf<IsStdException, arg_type<CatchFns>...>;
if (!*this) {
onNoExceptionError(__func__);
}
this->handle_(AllStdEx{}, *this, fns...);
}
} // namespace folly
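Stripped of the fold machinery, `exception_wrapper::handle()` with non-`std::exception` handlers boils down to rethrowing the stored exception inside nested try/catch blocks and running the first handler whose type matches (`HandleReduce` builds those blocks at compile time from the lambda argument types). A hedged std-only sketch of that dispatch, with the catch clauses written out by hand rather than generated by `fold()` (the name `classify` is a local stand-in):

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Dispatch an exception_ptr through ordered catch clauses, first match
// wins -- the runtime behavior that handle()'s generated code produces.
inline std::string classify(std::exception_ptr ep) {
  try {
    std::rethrow_exception(ep);
  } catch (const std::runtime_error&) {
    return "runtime_error";
  } catch (const std::exception&) {
    return "std::exception";
  } catch (...) {
    return "unknown";  // the catch-all clause, like an auto&& handler
  }
}
```

The point of `HandleStdExceptReduce` above is precisely to avoid this rethrow when every handler catches a `std::exception` subtype: in that case a `dynamic_cast` chain on the stored `std::exception*` gives the same first-match semantics without throwing at all.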

Some files were not shown because too many files changed in this diff.