Detailed Changelogs:
DATE: 14/09/2021
- arm64: configs: sakura: Update and Regenerate defconfig
- dts: msm8953-cpu: Update CPU efficiencies
- kernel: Kconfig.hz: add interface for setting 50Hz tick rate
- Makefile: Enable graphite flags for GCC only
- kernel: Add GCC Graphite Optimisation support
- drivers: touchscreen: daisy-sakura: GT917d: Drop testing code
- drivers: touchscreen: daisy-sakura: FT5446: Drop testing code
- cpufreq: schedutil: Expose default rate-limits as config options
- sched: Enable TTWU_QUEUE as a sched feature
- kernel: fair: turn off EAS_USE_NEED_IDLE sched feature
- sched: Do not always skip yielding tasks
- sched: Do not reduce perceived CPU capacity while idle
- sched: Disable ARCH_POWER
- sched: Enable NEXT_BUDDY for better cache locality
- sched/fair: Disable LB_BIAS by default
- sched/features: Disable Gentle Fair Sleepers
- BACKPORT: sched/core: Distribute tasks within affinity masks
- sched: Tweak the task migration logic for better multi-tasking workload
- sched/fair: Improve spreading of utilization
- [BACKPORT] sched/nohz: Optimize get_nohz_timer_target()
- [BACKPORT] sched/fair: Optimize select_idle_core()
- sched/fair: enforce EAS mode
- cpufreq: stats: replace the global lock with atomic
- cpufreq: governor: Be friendly towards latency-sensitive bursty workloads
- cpufreq: cpu-boost: don't boost if input_boost_ms is <= 0
- sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
- sched: fair: Misc changes
- sched: walt: Optimize cpu_util() and cpu_util_cum()
- sched: walt: Optimize task_util()
- sched/walt: Avoid taking rq lock for every IRQ update
- sched/walt: don't account CPU idle exit time to task demand
- Revert "sched: Use initial_task_util load for new tasks"
- sched: walt: Refactor WALT initialization routine
- sched: do not allocate window cpu arrays separately
- sched: walt: Correct WALT window size initialization
- sched: walt: Fix the bug in initializing the new task demand
- BACKPORT: ANDROID: sched/fair: Cleanup cpu_util{_wake}()
- sched/core: Fix paravirt build on arm and arm64
- sched/core: Remove unnecessary #include headers
- sched: fair: Nuke tracing
- sched/fair: replace cfs_rq->rb_leftmost
- sched: fastpath for prev_cpu
- fair: change bias_to_prev_cpu heuristic
- sched: restrict iowait boost to tasks with prefer_idle
- sched: Call init_sched_energy_costs() before sched_energy_probe()
- sched: set min & max capacity CPUs in sched_energy_probe
- sched: do not return error when setting the same sched_boost value
- sched: cpufreq: stop ignoring util updates
- cpufreq: schedutil: Don't set next_freq to UINT_MAX
- cpufreq: schedutil: Avoid using invalid next_freq
- cpufreq: Return 0 from ->fast_switch() on errors
- cpufreq: schedutil: clear cached_raw_freq when invalidated
- cpufreq: schedutil: Fix iowait boost reset
- cpufreq: schedutil: Use unsigned int for iowait boost
- cpufreq: schedutil: Make iowait boost more energy efficient
- sched: Make iowait_boost optional in schedutil
- cpufreq: schedutil: Remove CAF predicted load functionality
- cpufreq: schedutil: Remove CAF hispeed logic
- techpack: asoc: max98927: Remove some logging
- treewide: Fix misleading-indentation warnings
- treewide: Fix various misleading-indentation warnings
- power: qpnp-smbcharger_d1a: Cleanup redundant thermal mitigation based on country
- kernel: wait: do accept() in LIFO order for cache efficiency
- smp: Avoid using two cache lines for struct call_single_data
- soc: qcom: watchdog_v2: Fix memory leaks when memory_dump_v2 isn't built
- arm64: topology: fix cpu power calculation
- arm64: Improve parking of stopped CPUs
- i2c: refactor i2c_master_{send,recv}
- slub: Optimized SLUB Memory Allocator
- lib/bsearch.c: micro-optimize pivot position calculation
- drivers: soc: Add support to toggle SMD channel functionality
- char: msm_smd_pkt: Reduce wakelock timeout
- char: msm_smd_pkt: Reduce lock contention
- fbdev: msm: Remove partial update region delays
- mdss: Validate cursor image size
- fbdev: msm: xlog: Reduce log spam
- ipv4/tcp: Force applications to use `TCP_NODELAY` to improve network latency
- workqueue: Replace pool->attach_mutex with global wq_pool_attach_mutex
- workqueue: allow toggling wq_power_efficient
- fs: Align file struct to 8 bytes
- msm: sde: Fix uninitialized variable usage
- msm: kgsl: Remove unneeded time profiling from ringbuffer submission
- msm: kgsl: Use lock-less list for page pools
- drivers: thermal: limits-dcvs: Always build driver
- staging: android: ashmem: Get rid of the big mutex lock
- EXT4 optimizations
- ANDROID: ARM64: smp: disable preempt in backtracing across all cores
- ashmem: Align slab caches to L1 cache line
- block: immediately dispatch big size request
- arm64: Remove useless UAO IPI and describe how this gets enabled
- arm64: relax assembly code alignment from 16 byte to 4 byte
- fs: eventpoll: prevent all wakeups by eventpoll
- perf: Restrict perf event sampling CPU time to 5%
- percpu_counter: scalability works
- UPSTREAM: loop: Set correct device size when using LOOP_CONFIGURE
- arm64: configs: sakura: Update defconfig
- misc: Remove VLA from uid_sys_stats.c
- crypto: skcipher - Add separate walker for AEAD decryption
- crypto: skcipher - Add skcipher walk interface
- crypto: skcipher - Get rid of crypto_spawn_skcipher2()
- crypto: skcipher - Get rid of crypto_grab_skcipher2()
- crypto: msm: update crypto APIs and avoid VLA
- techpack: avoid VLA
- crypto: qce - Remove VLA usage of skcipher
- msm: vidc: avoid VLA
- msm_smem: make temp_string_size a constant to avoid VLA
- ion: nuke dbg_str to avoid VLA
- Fix subtle macro variable shadowing in min_not_zero()
- kernel.h: Retain constant expression output for max()/min()
- linux/kernel.h: add/correct kernel-doc notation
- dm crypt: convert essiv from ahash to shash
- ppp: mppe: Remove VLA usage
- crypto: api - Introduce generic max blocksize and alignmask
- crypto: cbc - Remove VLA usage
- crypto: cbc - Export CBC implementation
- crypto: cbc - Convert to skcipher
- crypto: shash - Remove VLA usage in unaligned hashing
- crypto: hash - Remove VLA usage
- crypto: xcbc - Remove VLA usage
- crypto: api - laying defines and checks for statically allocated buffers
- crypto: null - Get rid of crypto_{get,put}_default_null_skcipher2()
- xfrm: remove VLA usage in __xfrm6_sort()
- netfilter: nfnetlink: Remove VLA usage
- rtnetlink: Fix null-ptr-deref in rtnl_newlink
- rtnetlink: Remove VLA usage
- pstore/ram: Do not use stack VLA for parity workspace
- rslib: Allocate decoder buffers to avoid VLAs
- rslib: Split rs control struct
- rslib: Simplify error path
- rslib: Remove GPL boilerplate
- rslib: Add SPDX identifiers
- rslib: Cleanup top level comments
- rslib: Cleanup whitespace damage
- rslib: Add GFP aware init function
- ntfs: decompress: remove VLA usage
- dm stripe: get rid of a Variable Length Array (VLA)
- gpio: Propagate errors from gpiod_set_array_value_complex()
- gpio: Remove VLA from gpiolib
- crypto: remove several VLAs
- crypto: pcbc - Convert to skcipher
- ALSA: pcm: Remove VLA usage
- crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()
- crypto: picoxcell - Remove VLA usage of skcipher
- crypto: mxs-dcp - Remove VLA usage of skcipher
- crypto: sahara - Remove VLA usage of skcipher
- crypto: cryptd - Remove VLA usage of skcipher
- crypto: cryptd - Remove unused but set variable 'tfm'
- crypto: cryptd - Add support for skcipher
- crypto: null - Remove VLA usage of skcipher
- crypto: ccp - Remove VLA usage of skcipher
- rxrpc: Remove VLA usage of skcipher
- ppp: mppe: Remove VLA usage of skcipher
- libceph: Remove VLA usage of skcipher
- block: cryptoloop: Remove VLA usage of skcipher
- x86/fpu: Remove VLA usage of skcipher
- crypto: aesni - Convert to skcipher
- s390/crypto: Remove VLA usage of skcipher
- gss_krb5: Remove VLA usage of skcipher
- crypto: skcipher - Introduce crypto_sync_skcipher
- fdt: Update CRC check for rng-seed
- fdt: add support for rng-seed
- arm64: map FDT as RW for early_init_dt_scan()
- arm64: kexec_file: add rng-seed support
- ion: add kernel support to get buffer flags
- mm: page_alloc: fix errors
- mm: oom_kill: Disable sysctl_oom_dump_tasks
- locking/rtmutex: Flip unlikely() branch to likely() in __rt_mutex_slowlock()
- mm/page-writeback.c: place "not" inside of unlikely() statement in wb_domain_writeout_inc()
- mm/mmzone.c: swap likely to unlikely as code logic is different for next_zones_zonelist()
- mm: maintain randomization of page free lists
- mm: move buddy list manipulations into helpers
- mm: shuffle initial free memory to improve memory-side-cache utilization
- mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
- mm, sl[ou]b: improve memory accounting
- mm: kmemleak: Don't die when memory allocation fails
- mm: Increase ratelimit pages value
- mm: wakeup kswapd for order-0 allocation
- mm, page_alloc: double zone's batchsize
- mm/compaction: Disable compaction of unevictable pages
- mm/free_pcppages_bulk: prefetch buddy while not holding lock
- mm/free_pcppages_bulk: do not hold lock when picking pages to free
- mm, swap: fix race between swap count continuation operations
- mm: compaction: Fix 100% CPU usage after task is killed
- msm: kgsl: Use a common sharedmem init function
- msm: kgsl: Use del_timer() where appropriate
- msm: kgsl: Implement a list of imported memory
- msm: kgsl: Pad sparse objects when mapping them
- msm: kgsl: Record the cacheability attribute of ion buffers
- msm: kgsl: Do not modify the userspace alignment request
- msm: kgsl: Force VA alignment and padding if required
- lockref: Limit number of cmpxchg loop retries
- mm: introduce NR_INDIRECTLY_RECLAIMABLE_BYTES
- mm: treat indirectly reclaimable memory as free in overcommit logic
- mm: treat indirectly reclaimable memory as available in MemAvailable
- dcache: account external names as indirectly reclaimable memory
- mm: do not reclaim excessively at high-order allocation
- mm: skip swap readahead when process is exiting