Autogenerated GPG tag for Rusty D1ADB8F1: 15EE 8D6C AB0E 7F0C F999 BFCB D920 0E6C D1AD B8F1
Merge tag 'for-linus' of git://github.com/rustyrussell/linux

Pull cpumask cleanups from Rusty Russell:
 "(Somehow forgot to send this out; it's been sitting in linux-next, and
  if you don't want it, it can sit there another cycle)"

I'm a sucker for things that actually delete lines of code.

Fix up trivial conflict in arch/arm/kernel/kprobes.c, where Rusty fixed
a user of &cpu_online_map to be cpu_online_mask, but that code got
deleted by commit b21d55e98a ("ARM: 7332/1: extract out code patch
function from kprobes").

* tag 'for-linus' of git://github.com/rustyrussell/linux:
  cpumask: remove old cpu_*_map.
  documentation: remove references to cpu_*_map.
  drivers/cpufreq/db8500-cpufreq: remove references to cpu_*_map.
  remove references to cpu_*_map in arch/
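
Every hunk below applies the same mechanical conversion: the writable global
bitmaps (cpu_possible_map, cpu_online_map, cpu_present_map, cpu_active_map)
give way to const struct cpumask pointers plus accessor helpers. A rough
before/after sketch of the pattern (the helper name is illustrative, not
taken from the series):

	#include <linux/cpumask.h>
	#include <linux/printk.h>

	/* Before the series (these names no longer compile once removed):
	 *	cpu_set(cpu, cpu_possible_map);
	 *	if (cpu_isset(cpu, cpu_online_map)) ...
	 */
	static void mark_cpu_possible(unsigned int cpu)
	{
		set_cpu_possible(cpu, true);	/* accessor, not a direct bitmap write */
		if (cpu_online(cpu))		/* replaces cpu_isset(cpu, cpu_online_map) */
			pr_info("cpu%u already online\n", cpu);
	}
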
@@ -217,7 +217,7 @@ and name space for cpusets, with a minimum of additional kernel code.
 The cpus and mems files in the root (top_cpuset) cpuset are
 read-only. The cpus file automatically tracks the value of
-cpu_online_map using a CPU hotplug notifier, and the mems file
+cpu_online_mask using a CPU hotplug notifier, and the mems file
 automatically tracks the value of node_states[N_HIGH_MEMORY]--i.e.,
 nodes with memory--using the cpuset_track_online_nodes() hook.
 

@@ -47,7 +47,7 @@ maxcpus=n Restrict boot time cpus to n. Say if you have 4 cpus, using
 		other cpus later online, read FAQ's for more info.
 
 additional_cpus=n (*)	Use this to limit hotpluggable cpus. This option sets
-			cpu_possible_map = cpu_present_map + additional_cpus
+			cpu_possible_mask = cpu_present_mask + additional_cpus
 
 cede_offline={"off","on"}  Use this option to disable/enable putting offlined
 			   processors to an extended H_CEDE state on

@@ -64,11 +64,11 @@ should only rely on this to count the # of cpus, but *MUST* not rely
 on the apicid values in those tables for disabled apics. In the event
 BIOS doesn't mark such hot-pluggable cpus as disabled entries, one could
 use this parameter "additional_cpus=x" to represent those cpus in the
-cpu_possible_map.
+cpu_possible_mask.
 
 possible_cpus=n		[s390,x86_64] use this to set hotpluggable cpus.
 			This option sets possible_cpus bits in
-			cpu_possible_map. Thus keeping the numbers of bits set
+			cpu_possible_mask. Thus keeping the numbers of bits set
 			constant even if the machine gets rebooted.
 
 CPU maps and such

@@ -76,7 +76,7 @@ CPU maps and such
 [More on cpumaps and primitive to manipulate, please check
 include/linux/cpumask.h that has more descriptive text.]
 
-cpu_possible_map: Bitmap of possible CPUs that can ever be available in the
+cpu_possible_mask: Bitmap of possible CPUs that can ever be available in the
 system. This is used to allocate some boot time memory for per_cpu variables
 that aren't designed to grow/shrink as CPUs are made available or removed.
 Once set during boot time discovery phase, the map is static, i.e no bits

@@ -84,13 +84,13 @@ are added or removed anytime. Trimming it accurately for your system needs
 upfront can save some boot time memory. See below for how we use heuristics
 in x86_64 case to keep this under check.
 
-cpu_online_map: Bitmap of all CPUs currently online. Its set in __cpu_up()
+cpu_online_mask: Bitmap of all CPUs currently online. Its set in __cpu_up()
 after a cpu is available for kernel scheduling and ready to receive
 interrupts from devices. Its cleared when a cpu is brought down using
 __cpu_disable(), before which all OS services including interrupts are
 migrated to another target CPU.
 
-cpu_present_map: Bitmap of CPUs currently present in the system. Not all
+cpu_present_mask: Bitmap of CPUs currently present in the system. Not all
 of them may be online. When physical hotplug is processed by the relevant
 subsystem (e.g ACPI) can change and new bit either be added or removed
 from the map depending on the event is hot-add/hot-remove. There are currently

@@ -99,22 +99,22 @@ at which time hotplug is disabled.
 You really dont need to manipulate any of the system cpu maps. They should
 be read-only for most use. When setting up per-cpu resources almost always use
-cpu_possible_map/for_each_possible_cpu() to iterate.
+cpu_possible_mask/for_each_possible_cpu() to iterate.
 
 Never use anything other than cpumask_t to represent bitmap of CPUs.
 
 #include <linux/cpumask.h>
 
-for_each_possible_cpu     - Iterate over cpu_possible_map
-for_each_online_cpu       - Iterate over cpu_online_map
-for_each_present_cpu      - Iterate over cpu_present_map
+for_each_possible_cpu     - Iterate over cpu_possible_mask
+for_each_online_cpu       - Iterate over cpu_online_mask
+for_each_present_cpu      - Iterate over cpu_present_mask
 for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
 
 #include <linux/cpu.h>
 get_online_cpus() and put_online_cpus():
 
 The above calls are used to inhibit cpu hotplug operations. While the
-cpu_hotplug.refcount is non zero, the cpu_online_map will not change.
+cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
 If you merely need to avoid cpus going away, you could also use
 preempt_disable() and preempt_enable() for those sections.
 Just remember the critical section cannot call any

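The iterator macros and the get_online_cpus()/put_online_cpus() pair described
in the documentation above combine as in this minimal sketch (the counting
helper is hypothetical):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>

	static unsigned int count_online_cpus_stable(void)
	{
		unsigned int cpu, n = 0;

		get_online_cpus();	/* cpu_online_mask cannot change in here */
		for_each_online_cpu(cpu)
			n++;
		put_online_cpus();

		return n;
	}
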
@@ -450,7 +450,7 @@ setup_smp(void)
 		smp_num_probed = 1;
 	}
 
-	printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_map = %lx\n",
+	printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_mask = %lx\n",
 	       smp_num_probed, cpumask_bits(cpu_present_mask)[0]);
 }
 

@@ -152,7 +152,7 @@ int __kprobes __arch_disarm_kprobe(void *p)
 
 void __kprobes arch_disarm_kprobe(struct kprobe *p)
 {
-	stop_machine(__arch_disarm_kprobe, p, &cpu_online_map);
+	stop_machine(__arch_disarm_kprobe, p, cpu_online_mask);
 }
 
 void __kprobes arch_remove_kprobe(struct kprobe *p)

@@ -349,7 +349,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	 * re-initialize the map in platform_smp_prepare_cpus() if
 	 * present != possible (e.g. physical hotplug).
 	 */
-	init_cpu_present(&cpu_possible_map);
+	init_cpu_present(cpu_possible_mask);
 
 	/*
 	 * Initialise the SCU if there are more than one CPU

@@ -581,8 +581,9 @@ void smp_send_stop(void)
 	unsigned long timeout;
 
 	if (num_online_cpus() > 1) {
-		cpumask_t mask = cpu_online_map;
-		cpu_clear(smp_processor_id(), mask);
+		struct cpumask mask;
+		cpumask_copy(&mask, cpu_online_mask);
+		cpumask_clear_cpu(smp_processor_id(), &mask);
 
 		smp_cross_call(&mask, IPI_CPU_STOP);
 	}

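The ARM smp_send_stop() hunk shows the general recipe for "everyone but me"
masks under the new API: declare a struct cpumask, copy cpu_online_mask into
it, clear the current CPU. A sketch of just that step (the helper name is
illustrative; callers need preemption disabled for smp_processor_id()):

	#include <linux/cpumask.h>
	#include <linux/smp.h>

	static void build_mask_of_others(struct cpumask *mask)
	{
		cpumask_copy(mask, cpu_online_mask);		/* start from all online CPUs */
		cpumask_clear_cpu(smp_processor_id(), mask);	/* drop ourselves */
	}
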
@@ -35,7 +35,7 @@
 #define BASE_IPI_IRQ 26
 
 /*
- * cpu_possible_map needs to be filled out prior to setup_per_cpu_areas
+ * cpu_possible_mask needs to be filled out prior to setup_per_cpu_areas
  * (which is prior to any of our smp_prepare_cpu crap), in order to set
  * up the... per_cpu areas.
  */

@@ -208,7 +208,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
 	stack_start = ((void *) thread) + THREAD_SIZE;
 	__vmstart(start_secondary, stack_start);
 
-	while (!cpu_isset(cpu, cpu_online_map))
+	while (!cpu_online(cpu))
 		barrier();
 
 	return 0;

@@ -229,7 +229,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 
 	/* Right now, let's just fake it. */
 	for (i = 0; i < max_cpus; i++)
-		cpu_set(i, cpu_present_map);
+		set_cpu_present(i, true);
 
 	/* Also need to register the interrupts for IPI */
 	if (max_cpus > 1)

@@ -269,5 +269,5 @@ void smp_start_cpus(void)
 	int i;
 
 	for (i = 0; i < NR_CPUS; i++)
-		cpu_set(i, cpu_possible_map);
+		set_cpu_possible(i, true);
 }

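The hexagon hunks reduce CPU bring-up bookkeeping to the generic accessors;
the loop shape, as a hedged sketch (fake_probe_cpus is an illustrative name):

	#include <linux/cpumask.h>

	static void fake_probe_cpus(unsigned int max_cpus)
	{
		unsigned int i;

		for (i = 0; i < max_cpus; i++) {
			set_cpu_possible(i, true);	/* was cpu_set(i, cpu_possible_map) */
			set_cpu_present(i, true);	/* was cpu_set(i, cpu_present_map) */
		}
	}
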
@@ -839,7 +839,7 @@ static __init int setup_additional_cpus(char *s)
 early_param("additional_cpus", setup_additional_cpus);
 
 /*
- * cpu_possible_map should be static, it cannot change as CPUs
+ * cpu_possible_mask should be static, it cannot change as CPUs
  * are onlined, or offlined. The reason is per-cpu data-structures
  * are allocated by some modules at init time, and dont expect to
  * do this dynamically on cpu arrival/departure.

@@ -78,7 +78,7 @@ static inline void octeon_send_ipi_mask(const struct cpumask *mask,
 }
 
 /**
- * Detect available CPUs, populate cpu_possible_map
+ * Detect available CPUs, populate cpu_possible_mask
  */
 static void octeon_smp_hotplug_setup(void)
 {

@@ -268,7 +268,7 @@ static int octeon_cpu_disable(void)
 
 	spin_lock(&smp_reserve_lock);
 
-	cpu_clear(cpu, cpu_online_map);
+	set_cpu_online(cpu, false);
 	cpu_clear(cpu, cpu_callin_map);
 	local_irq_disable();
 	fixup_irqs();

@@ -173,7 +173,7 @@ asmlinkage long mipsmt_sys_sched_getaffinity(pid_t pid, unsigned int len,
 	if (retval)
 		goto out_unlock;
 
-	cpus_and(mask, p->thread.user_cpus_allowed, cpu_possible_map);
+	cpumask_and(&mask, &p->thread.user_cpus_allowed, cpu_possible_mask);
 
 out_unlock:
 	read_unlock(&tasklist_lock);

@@ -25,7 +25,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	int i;
 
 #ifdef CONFIG_SMP
-	if (!cpu_isset(n, cpu_online_map))
+	if (!cpu_online(n))
 		return 0;
 #endif
 

@@ -317,7 +317,7 @@ static int bmips_cpu_disable(void)
 
 	pr_info("SMP: CPU%d is offline\n", cpu);
 
-	cpu_clear(cpu, cpu_online_map);
+	set_cpu_online(cpu, false);
 	cpu_clear(cpu, cpu_callin_map);
 
 	local_flush_tlb_all();

@@ -148,7 +148,7 @@ static void stop_this_cpu(void *dummy)
 	/*
 	 * Remove this CPU:
 	 */
-	cpu_clear(smp_processor_id(), cpu_online_map);
+	set_cpu_online(smp_processor_id(), false);
 	for (;;) {
 		if (cpu_wait)
 			(*cpu_wait)(); /* Wait if available. */

@@ -174,7 +174,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	mp_ops->prepare_cpus(max_cpus);
 	set_cpu_sibling_map(0);
 #ifndef CONFIG_HOTPLUG_CPU
-	init_cpu_present(&cpu_possible_map);
+	init_cpu_present(cpu_possible_mask);
 #endif
 }
 

@@ -248,7 +248,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
 	while (!cpu_isset(cpu, cpu_callin_map))
 		udelay(100);
 
-	cpu_set(cpu, cpu_online_map);
+	set_cpu_online(cpu, true);
 
 	return 0;
 }

@@ -320,13 +320,12 @@ void flush_tlb_mm(struct mm_struct *mm)
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_on_other_tlbs(flush_tlb_mm_ipi, mm);
 	} else {
-		cpumask_t mask = cpu_online_map;
 		unsigned int cpu;
 
-		cpu_clear(smp_processor_id(), mask);
-		for_each_cpu_mask(cpu, mask)
-			if (cpu_context(cpu, mm))
+		for_each_online_cpu(cpu) {
+			if (cpu != smp_processor_id() && cpu_context(cpu, mm))
 				cpu_context(cpu, mm) = 0;
+		}
 	}
 	local_flush_tlb_mm(mm);

@@ -360,13 +359,12 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
 
 		smp_on_other_tlbs(flush_tlb_range_ipi, &fd);
 	} else {
-		cpumask_t mask = cpu_online_map;
 		unsigned int cpu;
 
-		cpu_clear(smp_processor_id(), mask);
-		for_each_cpu_mask(cpu, mask)
-			if (cpu_context(cpu, mm))
+		for_each_online_cpu(cpu) {
+			if (cpu != smp_processor_id() && cpu_context(cpu, mm))
 				cpu_context(cpu, mm) = 0;
+		}
 	}
 	local_flush_tlb_range(vma, start, end);
 	preempt_enable();

@@ -407,13 +405,12 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 
 		smp_on_other_tlbs(flush_tlb_page_ipi, &fd);
 	} else {
-		cpumask_t mask = cpu_online_map;
 		unsigned int cpu;
 
-		cpu_clear(smp_processor_id(), mask);
-		for_each_cpu_mask(cpu, mask)
-			if (cpu_context(cpu, vma->vm_mm))
+		for_each_online_cpu(cpu) {
+			if (cpu != smp_processor_id() && cpu_context(cpu, vma->vm_mm))
 				cpu_context(cpu, vma->vm_mm) = 0;
+		}
 	}
 	local_flush_tlb_page(vma, page);
 	preempt_enable();

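All three MIPS TLB-flush hunks above share one rewrite: instead of copying
cpu_online_map to a stack mask and clearing the current CPU, they iterate
for_each_online_cpu() and skip self inline, which also deletes the on-stack
cpumask_t. The shape, as a sketch (reset_context is a hypothetical stand-in
for the cpu_context(...) = 0 assignment; callers run with preemption
disabled, as in the originals):

	#include <linux/cpumask.h>
	#include <linux/smp.h>

	static void for_other_online_cpus(void (*reset_context)(unsigned int cpu))
	{
		unsigned int cpu;

		for_each_online_cpu(cpu) {
			if (cpu != smp_processor_id())	/* skip self without a mask copy */
				reset_context(cpu);
		}
	}
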
@@ -291,7 +291,7 @@ static void smtc_configure_tlb(void)
  * possibly leave some TCs/VPEs as "slave" processors.
  *
  * Use c0_MVPConf0 to find out how many TCs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
  */
 
 int __init smtc_build_cpu_map(int start_cpu_slot)

@@ -80,9 +80,9 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
 	if (vma)
 		mask = *mm_cpumask(vma->vm_mm);
 	else
-		mask = cpu_online_map;
-	cpu_clear(cpu, mask);
-	for_each_cpu_mask(cpu, mask)
+		mask = *cpu_online_mask;
+	cpumask_clear_cpu(cpu, &mask);
+	for_each_cpu(cpu, &mask)
 		octeon_send_ipi_single(cpu, SMP_ICACHE_FLUSH);
 
 	preempt_enable();

@@ -165,7 +165,7 @@ void __init nlm_smp_setup(void)
 	cpu_set(boot_cpu, phys_cpu_present_map);
 	__cpu_number_map[boot_cpu] = 0;
 	__cpu_logical_map[0] = boot_cpu;
-	cpu_set(0, cpu_possible_map);
+	set_cpu_possible(0, true);
 
 	num_cpus = 1;
 	for (i = 0; i < NR_CPUS; i++) {

@@ -177,14 +177,14 @@ void __init nlm_smp_setup(void)
 			cpu_set(i, phys_cpu_present_map);
 			__cpu_number_map[i] = num_cpus;
 			__cpu_logical_map[num_cpus] = i;
-			cpu_set(num_cpus, cpu_possible_map);
+			set_cpu_possible(num_cpus, true);
 			++num_cpus;
 		}
 	}
 
 	pr_info("Phys CPU present map: %lx, possible map %lx\n",
 		(unsigned long)phys_cpu_present_map.bits[0],
-		(unsigned long)cpu_possible_map.bits[0]);
+		(unsigned long)cpumask_bits(cpu_possible_mask)[0]);
 
 	pr_info("Detected %i Slave CPU(s)\n", num_cpus);
 	nlm_set_nmi_handler(nlm_boot_secondary_cpus);

@@ -146,7 +146,7 @@ static void __cpuinit yos_boot_secondary(int cpu, struct task_struct *idle)
 }
 
 /*
- * Detect available CPUs, populate cpu_possible_map before smp_init
+ * Detect available CPUs, populate cpu_possible_mask before smp_init
  *
  * We don't want to start the secondary CPU yet nor do we have a nice probing
  * feature in PMON so we just assume presence of the secondary core.

@@ -155,10 +155,10 @@ static void __init yos_smp_setup(void)
 {
 	int i;
 
-	cpus_clear(cpu_possible_map);
+	init_cpu_possible(cpu_none_mask);
 
 	for (i = 0; i < 2; i++) {
-		cpu_set(i, cpu_possible_map);
+		set_cpu_possible(i, true);
 		__cpu_number_map[i] = i;
 		__cpu_logical_map[i] = i;
 	}

@@ -169,7 +169,7 @@ static void __init yos_prepare_cpus(unsigned int max_cpus)
 	/*
 	 * Be paranoid. Enable the IPI only if we're really about to go SMP.
 	 */
-	if (cpus_weight(cpu_possible_map))
+	if (num_possible_cpus())
 		set_c0_status(STATUSF_IP5);
 }
 

@@ -76,7 +76,7 @@ static int do_cpumask(cnodeid_t cnode, nasid_t nasid, int highest)
 		/* Only let it join in if it's marked enabled */
 		if ((acpu->cpu_info.flags & KLINFO_ENABLE) &&
 		    (tot_cpus_found != NR_CPUS)) {
-			cpu_set(cpuid, cpu_possible_map);
+			set_cpu_possible(cpuid, true);
 			alloc_cpupda(cpuid, tot_cpus_found);
 			cpus_found++;
 			tot_cpus_found++;

@@ -138,7 +138,7 @@ static void __cpuinit bcm1480_boot_secondary(int cpu, struct task_struct *idle)
 
 /*
  * Use CFE to find out how many CPUs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
  * XXXKW will the boot CPU ever not be physical 0?
  *
  * Common setup before any secondaries are started

@@ -147,14 +147,13 @@ static void __init bcm1480_smp_setup(void)
 {
 	int i, num;
 
-	cpus_clear(cpu_possible_map);
-	cpu_set(0, cpu_possible_map);
+	init_cpu_possible(cpumask_of(0));
 	__cpu_number_map[0] = 0;
 	__cpu_logical_map[0] = 0;
 
 	for (i = 1, num = 0; i < NR_CPUS; i++) {
 		if (cfe_cpu_stop(i) == 0) {
-			cpu_set(i, cpu_possible_map);
+			set_cpu_possible(i, true);
 			__cpu_number_map[i] = ++num;
 			__cpu_logical_map[num] = i;
 		}

@@ -126,7 +126,7 @@ static void __cpuinit sb1250_boot_secondary(int cpu, struct task_struct *idle)
 
 /*
  * Use CFE to find out how many CPUs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
  * XXXKW will the boot CPU ever not be physical 0?
  *
  * Common setup before any secondaries are started

@@ -135,14 +135,13 @@ static void __init sb1250_smp_setup(void)
 {
 	int i, num;
 
-	cpus_clear(cpu_possible_map);
-	cpu_set(0, cpu_possible_map);
+	init_cpu_possible(cpumask_of(0));
 	__cpu_number_map[0] = 0;
 	__cpu_logical_map[0] = 0;
 
 	for (i = 1, num = 0; i < NR_CPUS; i++) {
 		if (cfe_cpu_stop(i) == 0) {
-			cpu_set(i, cpu_possible_map);
+			set_cpu_possible(i, true);
 			__cpu_number_map[i] = ++num;
 			__cpu_logical_map[num] = i;
 		}

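In the bcm1480 and sb1250 setup hunks, init_cpu_possible() collapses the old
clear-then-seed pair into one call. A sketch of the equivalence (the helper
name is illustrative):

	#include <linux/cpumask.h>

	static void reset_possible_to_boot_cpu(void)
	{
		/* old:	cpus_clear(cpu_possible_map);
		 *	cpu_set(0, cpu_possible_map);
		 */
		init_cpu_possible(cpumask_of(0));	/* possible = just CPU 0 */
	}
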
@@ -104,11 +104,11 @@ static int irq_choose_cpu(const struct cpumask *affinity)
 {
 	cpumask_t mask;
 
-	cpus_and(mask, cpu_online_map, *affinity);
-	if (cpus_equal(mask, cpu_online_map) || cpus_empty(mask))
+	cpumask_and(&mask, cpu_online_mask, affinity);
+	if (cpumask_equal(&mask, cpu_online_mask) || cpumask_empty(&mask))
 		return boot_cpu_id;
 	else
-		return first_cpu(mask);
+		return cpumask_first(&mask);
 }
 #else
 #define irq_choose_cpu(affinity) boot_cpu_id

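The sparc irq_choose_cpu() hunk is a compact tour of the pointer-based
operators (cpumask_and, cpumask_equal, cpumask_empty, cpumask_first). The
same logic restated as a sketch, with an explicit fallback parameter standing
in for boot_cpu_id:

	#include <linux/cpumask.h>

	static int choose_irq_cpu(const struct cpumask *affinity, int fallback)
	{
		cpumask_t mask;

		cpumask_and(&mask, cpu_online_mask, affinity);
		if (cpumask_equal(&mask, cpu_online_mask) || cpumask_empty(&mask))
			return fallback;	/* affinity allows everything, or nothing online */
		return cpumask_first(&mask);
	}
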
@@ -1100,7 +1100,7 @@ EXPORT_SYMBOL(hash_for_home_map);
 
 /*
  * cpu_cacheable_map lists all the cpus whose caches the hypervisor can
- * flush on our behalf. It is set to cpu_possible_map OR'ed with
+ * flush on our behalf. It is set to cpu_possible_mask OR'ed with
  * hash_for_home_map, and it is what should be passed to
  * hv_flush_remote() to flush all caches. Note that if there are
  * dedicated hypervisor driver tiles that have authorized use of their

@@ -1186,7 +1186,7 @@ static void __init setup_cpu_maps(void)
 			      sizeof(cpu_lotar_map));
 	if (rc < 0) {
 		pr_err("warning: no HV_INQ_TILES_LOTAR; using AVAIL\n");
-		cpu_lotar_map = cpu_possible_map;
+		cpu_lotar_map = *cpu_possible_mask;
 	}
 
 #if CHIP_HAS_CBOX_HOME_MAP()

@@ -1196,9 +1196,9 @@ static void __init setup_cpu_maps(void)
 			      sizeof(hash_for_home_map));
 	if (rc < 0)
 		early_panic("hv_inquire_tiles(HFH_CACHE) failed: rc %d\n", rc);
-	cpumask_or(&cpu_cacheable_map, &cpu_possible_map, &hash_for_home_map);
+	cpumask_or(&cpu_cacheable_map, cpu_possible_mask, &hash_for_home_map);
 #else
-	cpu_cacheable_map = cpu_possible_map;
+	cpu_cacheable_map = *cpu_possible_mask;
 #endif
 }
 

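The tile hunks show the pointer flavour of the change: where old code took
the address of the global (&cpu_possible_map), new code passes the const
pointer directly, and plain assignments dereference it. A sketch under those
assumptions (dst and extra are illustrative parameters):

	#include <linux/cpumask.h>

	static void union_with_possible(struct cpumask *dst, const struct cpumask *extra)
	{
		/* old: cpumask_or(dst, &cpu_possible_map, extra); */
		cpumask_or(dst, cpu_possible_mask, extra);
	}
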
@@ -41,7 +41,7 @@ static int __init start_kernel_proc(void *unused)
 	cpu_tasks[0].pid = pid;
 	cpu_tasks[0].task = current;
 #ifdef CONFIG_SMP
-	cpu_online_map = cpumask_of_cpu(0);
+	init_cpu_online(get_cpu_mask(0));
 #endif
 	start_kernel();
 	return 0;

@@ -76,7 +76,7 @@ static int idle_proc(void *cpup)
 		cpu_relax();
 
 	notify_cpu_starting(cpu);
-	cpu_set(cpu, cpu_online_map);
+	set_cpu_online(cpu, true);
 	default_idle();
 	return 0;
 }

@@ -110,8 +110,7 @@ void smp_prepare_cpus(unsigned int maxcpus)
 	for (i = 0; i < ncpus; ++i)
 		set_cpu_possible(i, true);
 
-	cpu_clear(me, cpu_online_map);
-	cpu_set(me, cpu_online_map);
+	set_cpu_online(me, true);
 	cpu_set(me, cpu_callin_map);
 
 	err = os_pipe(cpu_data[me].ipi_pipe, 1, 1);

@@ -138,13 +137,13 @@ void smp_prepare_cpus(unsigned int maxcpus)
 
 void smp_prepare_boot_cpu(void)
 {
-	cpu_set(smp_processor_id(), cpu_online_map);
+	set_cpu_online(smp_processor_id(), true);
 }
 
 int __cpu_up(unsigned int cpu)
 {
 	cpu_set(cpu, smp_commenced_mask);
-	while (!cpu_isset(cpu, cpu_online_map))
+	while (!cpu_online(cpu))
 		mb();
 	return 0;
 }

@@ -967,7 +967,7 @@ void xen_setup_shared_info(void)
 		xen_setup_mfn_list_list();
 }
 
-/* This is called once we have the cpu_possible_map */
+/* This is called once we have the cpu_possible_mask */
 void xen_setup_vcpu_info_placement(void)
 {
 	int cpu;

@@ -142,7 +142,7 @@ static int __cpuinit db8500_cpufreq_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.transition_latency = 20 * 1000; /* in ns */
 
 	/* policy sharing between dual CPUs */
-	cpumask_copy(policy->cpus, &cpu_present_map);
+	cpumask_copy(policy->cpus, cpu_present_mask);
 
 	policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
 

@@ -764,12 +764,6 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
  *
  */
 #ifndef CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
-/* These strip const, as traditionally they weren't const. */
-#define cpu_possible_map (*(cpumask_t *)cpu_possible_mask)
-#define cpu_online_map (*(cpumask_t *)cpu_online_mask)
-#define cpu_present_map (*(cpumask_t *)cpu_present_mask)
-#define cpu_active_map (*(cpumask_t *)cpu_active_mask)
-
 #define cpumask_of_cpu(cpu) (*get_cpu_mask(cpu))
 
 #define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(NR_CPUS)

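The removed #defines existed only so legacy code could keep writing the old
map names; each one cast const away from the real mask. With the last users
converted by the hunks above, nothing needs that escape hatch. A hedged
example of the const-safe read that replaces it (the helper name is
illustrative):

	#include <linux/cpumask.h>

	static bool cpu_could_exist(unsigned int cpu)
	{
		return cpumask_test_cpu(cpu, cpu_possible_mask);	/* read via the const pointer */
	}
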
@@ -1414,8 +1414,8 @@ endif # MODULES
 config INIT_ALL_POSSIBLE
 	bool
 	help
-	  Back when each arch used to define their own cpu_online_map and
-	  cpu_possible_map, some of them chose to initialize cpu_possible_map
+	  Back when each arch used to define their own cpu_online_mask and
+	  cpu_possible_mask, some of them chose to initialize cpu_possible_mask
 	  with all 1s, and others with all 0s. When they were centralised,
 	  it was better to provide this option than to break all the archs
 	  and have several arch maintainers pursuing me down dark alleys.

@@ -270,11 +270,11 @@ static struct file_system_type cpuset_fs_type = {
  * are online. If none are online, walk up the cpuset hierarchy
  * until we find one that does have some online cpus. If we get
  * all the way to the top and still haven't found any online cpus,
- * return cpu_online_map. Or if passed a NULL cs from an exit'ing
- * task, return cpu_online_map.
+ * return cpu_online_mask. Or if passed a NULL cs from an exit'ing
+ * task, return cpu_online_mask.
  *
  * One way or another, we guarantee to return some non-empty subset
- * of cpu_online_map.
+ * of cpu_online_mask.
  *
  * Call with callback_mutex held.
  */

@@ -867,7 +867,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	int retval;
 	int is_load_balanced;
 
-	/* top_cpuset.cpus_allowed tracks cpu_online_map; it's read-only */
+	/* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
 	if (cs == &top_cpuset)
 		return -EACCES;
 

@@ -2149,7 +2149,7 @@ void __init cpuset_init_smp(void)
  *
  * Description: Returns the cpumask_var_t cpus_allowed of the cpuset
  * attached to the specified @tsk. Guaranteed to return some non-empty
- * subset of cpu_online_map, even if this means going outside the
+ * subset of cpu_online_mask, even if this means going outside the
  * tasks cpuset.
  **/
 