Compare commits

...

435 Commits

Author SHA1 Message Date
Andreas Auernhammer
404d2ebe3f set SSE headers in put-part response (#12008)
This commit fixes a bug in the put-part
implementation. The SSE headers should be
set as specified by AWS - See:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html

Now, the MinIO server should set SSE-C headers,
like `x-amz-server-side-encryption-customer-algorithm`.

Fixes #11991
2021-04-07 14:50:28 -07:00
Minio Trusted
46964eb764 Update yaml files to latest version RELEASE.2021-04-06T23-11-00Z 2021-04-06 23:35:33 +00:00
Poorna Krishnamoorthy
bfab990c33 Improve error message from SetRemoteTargetHandler (#11909) 2021-04-06 12:42:30 -07:00
Harshavardhana
94018588fe unmarshal both LegalHold and ObjectLockLegalHold XML types (#11921)
Because of silly AWS S3 behavior we have to handle both types.

fixes #11920
2021-04-06 12:41:56 -07:00
Anis Elleuch
8b76ba8d5d crawling: Apply lifecycle then decide healing action (#11563)
It is inefficient to decide to heal an object before checking its
lifecycle for expiration or transition. This commit reverses
the order of actions: evaluate lifecycle first, and heal only if asked
and the lifecycle evaluation results in a NoneAction.
2021-04-06 12:41:51 -07:00
Harshavardhana
7eb7f65e48 add policy conditions support for signatureVersion and authType (#11947)
https://docs.aws.amazon.com/AmazonS3/latest/API/bucket-policy-s3-sigv4-conditions.html

fixes #11944
2021-04-06 12:41:31 -07:00
Harshavardhana
c608c0688a fix: properly close leaking bandwidth monitor channel (#11967)
This PR fixes

- close the leaking bandwidth report channel
- remove the closer requirement for the bandwidth monitor;
  instead, if Read() fails, remember the error and return
  that error for all subsequent reads.
- use locking for usage-cache.bin updates; with inline
  data we cannot afford to have concurrent writes to
  usage-cache.bin corrupting xl.meta
2021-04-06 12:40:42 -07:00
Aditya Manthramurthy
41a9d1d778 Fix S3Select SQL column reference handling (#11957)
This change fixes handling of these types of queries:

- Double quoted column names with special characters:
    SELECT "column.name" FROM s3object
- Double quoted column names with reserved keywords:
    SELECT "CAST" FROM s3object
- Table name as prefix for column names:
    SELECT S3Object."CAST" FROM s3object
2021-04-06 12:40:28 -07:00
Klaus Post
e21e80841e Fix data race when connecting disks (#11983)
Multiple disks from the same set would be writing concurrently.

```
WARNING: DATA RACE
Write at 0x00c002100ce0 by goroutine 166:
  github.com/minio/minio/cmd.(*erasureSets).connectDisks.func1()
      d:/minio/minio/cmd/erasure-sets.go:254 +0x82f

Previous write at 0x00c002100ce0 by goroutine 129:
  github.com/minio/minio/cmd.(*erasureSets).connectDisks.func1()
      d:/minio/minio/cmd/erasure-sets.go:254 +0x82f

Goroutine 166 (running) created at:
  github.com/minio/minio/cmd.(*erasureSets).connectDisks()
      d:/minio/minio/cmd/erasure-sets.go:210 +0x324
  github.com/minio/minio/cmd.(*erasureSets).monitorAndConnectEndpoints()
      d:/minio/minio/cmd/erasure-sets.go:288 +0x244

Goroutine 129 (finished) created at:
  github.com/minio/minio/cmd.(*erasureSets).connectDisks()
      d:/minio/minio/cmd/erasure-sets.go:210 +0x324
  github.com/minio/minio/cmd.(*erasureSets).monitorAndConnectEndpoints()
      d:/minio/minio/cmd/erasure-sets.go:288 +0x244
```
2021-04-06 12:39:59 -07:00
Klaus Post
98c792bbeb Fix disk info race (#11984)
Protect updated members in xlStorage.

```
WARNING: DATA RACE
Write at 0x00c004b4ee78 by goroutine 1491:
  github.com/minio/minio/cmd.(*xlStorage).GetDiskID()
      d:/minio/minio/cmd/xl-storage.go:590 +0x1078
  github.com/minio/minio/cmd.(*xlStorageDiskIDCheck).checkDiskStale()
      d:/minio/minio/cmd/xl-storage-disk-id-check.go:195 +0x84
  github.com/minio/minio/cmd.(*xlStorageDiskIDCheck).StatVol()
      d:/minio/minio/cmd/xl-storage-disk-id-check.go:284 +0x16a
  github.com/minio/minio/cmd.erasureObjects.getBucketInfo.func1()
      d:/minio/minio/cmd/erasure-bucket.go:100 +0x1a5
  github.com/minio/minio/pkg/sync/errgroup.(*Group).Go.func1()
      d:/minio/minio/pkg/sync/errgroup/errgroup.go:122 +0xd7

Previous read at 0x00c004b4ee78 by goroutine 1087:
  github.com/minio/minio/cmd.(*xlStorage).CheckFile.func1()
      d:/minio/minio/cmd/xl-storage.go:1699 +0x384
  github.com/minio/minio/cmd.(*xlStorage).CheckFile()
      d:/minio/minio/cmd/xl-storage.go:1726 +0x13c
  github.com/minio/minio/cmd.(*xlStorageDiskIDCheck).CheckFile()
      d:/minio/minio/cmd/xl-storage-disk-id-check.go:446 +0x23b
  github.com/minio/minio/cmd.erasureObjects.parentDirIsObject.func1()
      d:/minio/minio/cmd/erasure-common.go:173 +0x194
  github.com/minio/minio/pkg/sync/errgroup.(*Group).Go.func1()
      d:/minio/minio/pkg/sync/errgroup/errgroup.go:122 +0xd7
```
2021-04-06 12:39:57 -07:00
Klaus Post
f687ba53bc Fix Access Key requests (#11979)
Fix accessing claims when auth error is unchecked.

Only replaced when unchecked and when clearly without side effects.

Fixes #11959
2021-04-06 11:03:55 -07:00
Harshavardhana
e3da59c923 fix possible crash in bucket bandwidth monitor (#11986) 2021-04-06 11:03:41 -07:00
Harshavardhana
781b9b051c fix: service accounts policy enforcement regression (#11910)
service accounts were not inheriting parent policies
anymore due to refactors in PolicyDBGet() from
the latest release; fix this behavior properly.
2021-04-06 08:58:05 -07:00
Harshavardhana
438becfde8 fix: delete/delete marker replication versions consistent (#11932)
replication didn't work as expected when deletion of
delete markers was requested in the DeleteMultipleObjects
API; this was due to incorrect lookup elements being
used to look for delete markers.
2021-04-06 08:57:36 -07:00
Harshavardhana
16ef338649 fix: notify parent user in notification events (#11934)
fixes #11885
2021-04-06 08:55:37 -07:00
Harshavardhana
3242847ec0 avoid network read errors crashing CreateFile call (#11939)
Thanks to @dvaldivia for reproducing this
2021-04-06 08:55:30 -07:00
Harshavardhana
cf87303094 do not call LocalStorageInfo on gateways (#11903)
fixes https://github.com/minio/mc/issues/3665
2021-03-25 15:26:22 -07:00
Harshavardhana
90d8ec6310 fix: reject duplicate keys in PostPolicyJSON document (#11902)
fixes #11894
2021-03-25 13:57:57 -07:00
Klaus Post
b383522743 fix error could not read /proc on windows. (#11868)
Bonus: Prealloc reasonable sizes for metrics.
2021-03-25 12:58:43 -07:00
Andreas Auernhammer
6d42036dd4 highwayhash: update to latest version containing an arm64 fix (#11901)
This commit updates the highwayhash version to `v1.0.2`
that fixes a critical issue on arm64.
2021-03-25 11:44:58 -07:00
Aditya Manthramurthy
b4d8bcf644 Converge PolicyDBGet functions in IAM (#11891) 2021-03-25 00:38:15 -07:00
Harshavardhana
d7f32ad649 xl: avoid sending Delete() remote call for fully successful runs
an optimization to avoid extra syscalls in PutObject(),
which otherwise add to our PutObject response times.
2021-03-24 17:32:12 -07:00
Aditya Manthramurthy
906d68c356 Fix LDAP policy application on user policy (#11887) 2021-03-24 12:29:25 -07:00
Klaus Post
749e9c5771 metrics: Add canceled requests (#11881)
Add metric for canceled requests
2021-03-24 10:25:27 -07:00
Harshavardhana
410e84d273 xl: add checks for minioTmpMetaBucket in CreateFile 2021-03-24 09:36:10 -07:00
Harshavardhana
75741dbf4a xl: remove cleanupDir instead use Delete() (#11880)
use a single call to remove directly at the disk
instead of removing recursively at the network layer.
2021-03-24 09:08:05 -07:00
Anis Elleuch
fad7b27f15 metrics: Change type of minio_s3_requests_waiting_total to gauge (#11884) 2021-03-24 09:06:37 -07:00
Harshavardhana
79564656eb xl: CreateFile shouldn't prematurely timeout (#11878)
For large objects, a single PUT operation taking more
than 3 minutes can time out prematurely as the
'ResponseHeader' timeout hits at 3 minutes. Avoid
this by keeping the connection active during the
CreateFile phase.
2021-03-24 09:05:03 -07:00
Harshavardhana
21cfc4aa49 Revert "xl: CreateFile shouldn't prematurely timeout (#11854)"
This reverts commit 922c7b57f5.
2021-03-23 23:47:45 -07:00
Harshavardhana
e80239a661 simplify OS instrumentation remove functions for global variables 2021-03-23 22:32:44 -07:00
Ritesh H Shukla
6a2ed44095 fix: optionally enable tracing posix calls 2021-03-23 22:23:08 -07:00
Aditya Manthramurthy
8adfeb0d84 fix: AccountInfo API for LDAP users (#11874)
Also, ensure admin APIs auth additionally validates groups
2021-03-23 17:39:20 -07:00
Harshavardhana
d23485e571 fix: LDAP groups handling and group mapping (#11855)
comprehensively handle group mapping for LDAP
users across the IAM sub-system.
2021-03-23 15:15:51 -07:00
Harshavardhana
da70e6ddf6 avoid healObjects recursively healing at empty path (#11856)
baseDirFromPrefix(prefix) for object names without a
parent directory incorrectly used an empty path, leading
to long listings at various paths that are not useful
for healing - avoid this listing completely if "baseDir"
returns empty; simply use the "prefix" as is.

this improves startup performance significantly
2021-03-23 07:57:07 -07:00
Harshavardhana
922c7b57f5 xl: CreateFile shouldn't prematurely timeout (#11854)
For large objects, a single PUT operation taking more
than 3 minutes can time out prematurely as the
'ResponseHeader' timeout hits at 3 minutes. Avoid
this by keeping the connection active during the
CreateFile phase.
2021-03-22 18:25:05 -07:00
Harshavardhana
726d80dbb7 fix: merge duplicate keys in post policy (#11843)
some SDKs might incorrectly send duplicate
entries for keys such as "conditions"; the Go
stdlib JSON unmarshaler does not support
duplicate keys - it skips earlier
duplicates and only preserves the last entry.

This can lead to issues where a policy JSON,
while being valid, might not properly apply
the required conditions, allowing situations
where a POST policy JSON would end up allowing
uploads to unauthorized buckets and paths.

This PR fixes this properly.
2021-03-20 22:16:30 -07:00
Ritesh H Shukla
23b03dadb8 Add process uptime metric (#11844) 2021-03-20 21:23:27 -07:00
Andreas Auernhammer
7b3719c17b crypto: simplify Context encoding (#11812)
This commit adds a `MarshalText` implementation
to the `crypto.Context` type.
The `MarshalText` implementation replaces the
`WriteTo` and `AppendTo` implementation.

It is slightly slower than the `AppendTo` implementation
```
goos: darwin
goarch: arm64
pkg: github.com/minio/minio/cmd/crypto
BenchmarkContext_AppendTo/0-elems-8         	381475698	         2.892 ns/op	       0 B/op	       0 allocs/op
BenchmarkContext_AppendTo/1-elems-8         	17945088	        67.54 ns/op	       0 B/op	       0 allocs/op
BenchmarkContext_AppendTo/3-elems-8         	 5431770	       221.2 ns/op	      72 B/op	       2 allocs/op
BenchmarkContext_AppendTo/4-elems-8         	 3430684	       346.7 ns/op	      88 B/op	       2 allocs/op
```
vs.
```
BenchmarkContext/0-elems-8         	135819834	         8.658 ns/op	       2 B/op	       1 allocs/op
BenchmarkContext/1-elems-8         	13326243	        89.20 ns/op	     128 B/op	       1 allocs/op
BenchmarkContext/3-elems-8         	 4935301	       243.1 ns/op	     200 B/op	       3 allocs/op
BenchmarkContext/4-elems-8         	 2792142	       428.2 ns/op	     504 B/op	       4 allocs/op
goos: darwin
```

However, the `AppendTo` benchmark used a pre-allocated buffer. While
this improves its performance it does not match the actual usage of
`crypto.Context` which is passed to a `KMS` and always encoded into
a newly allocated buffer.

Therefore, this change seems acceptable since it should not impact the
actual performance but reduces the overall code for Context marshaling.
2021-03-20 02:48:48 -07:00
Harshavardhana
9a6487319a remove MINIO_IO_DEADLINE support (#11841)
this feature was found to be not that useful
in actual deployments; remove support for it
for now.
2021-03-20 02:47:04 -07:00
Aditya Manthramurthy
94ff624242 Fix querying LDAP group/user policy (#11840) 2021-03-20 02:37:52 -07:00
Anis Elleuch
98ff91b484 xl: Reduce usage of isDirEmpty() (#11838)
When an object is removed, its parent directory is inspected to check if
it is empty, and removed if that is the case.

However, we can use os.Remove() directly since it is only able to remove
a file or an empty directory.
2021-03-19 15:42:01 -07:00
Anis Elleuch
4d86384dc7 xl: Remove non needed check for empty dir (#11835)
RenameData renames xl.meta and the data dir and removes the parent
directory if empty; however, there is a duplicate check for an empty
dir, since the parent dir of xl.meta is always the same as the data-dir.
2021-03-19 12:26:53 -07:00
mailsmail
27eb4ae3bc fix: sql cast function when converting to float (#11817) 2021-03-19 09:14:38 -07:00
Ritesh H Shukla
b5dcaaccb4 Introduce metrics caching for performant metrics (#11831) 2021-03-19 00:04:29 -07:00
Anis Elleuch
0843280dc3 lifecycle: Support old BucketLifecycleConfiguration tag (#11828)
Some old AWS SDKs send BucketLifecycleConfiguration as the root tag in
the bucket lifecycle document. This PR will support both
LifecycleConfiguration and BucketLifecycleConfiguration.
2021-03-18 22:18:35 -07:00
Harshavardhana
61a1ea60c2 add missing java headless jdk in mint 2021-03-18 20:38:50 -07:00
Harshavardhana
b92a220db1 fix: handle weird drives sporadic read O_DIRECT behavior (#11832)
on fresh reads, if the drive returns errInvalidArgument we
should simply turn off DirectIO and read normally; there
are situations in k8s-like environments where drives
behave sporadically within a single deployment and may
not properly implement O_DIRECT handling for reads.
2021-03-18 20:16:50 -07:00
Shireesh Anjal
be5910b87e fix: bucket / object count and size returned as 0 (#11825) 2021-03-18 14:40:21 -07:00
Harshavardhana
51a8619a79 [feat] Add configurable deadline for writers (#11822)
This PR adds deadlines to individual Write() calls, such
that slow drives are timed out appropriately and
the overall responsiveness of Writes() stays within
a predefined threshold, providing applications
sustained latency even if one of the drives is slow
to respond.
2021-03-18 14:09:55 -07:00
iternity-dotcom
d46c3c07a8 Add main_test.go to run system tests with coverage (#11783)
-  Build and RUN the test executable:
```
$ go test -covermode count
	-coverpkg="./..." -c -tags testrunmain
$ APP_ARGS="server /tmp/test" ./minio.test
	-test.run "^TestRunMain$"
	-test.coverprofile coverage.cov
```

- Or run the system under test just by calling go test
```
$ APP_ARGS="server /tmp/test" go test
	-cover
	-tags testrunmain
	-coverpkg="./..."
	-covermode count
	-coverprofile=coverage.cov
```

- Run System-Tests (when using GitBash, prefix this
  line with MSYS_NO_PATHCONV=1). Note that the
  SERVER_ENDPOINT must be reachable from
  inside the docker container (so don't use localhost!)
```
$ docker run
	-e MINT_MODE=full
	-e SERVER_ENDPOINT=192.168.47.11:9000
	-e ACCESS_KEY=minioadmin
	-e SECRET_KEY=minioadmin
	-v /tmp/mint/log:/mint/log
	minio/mint
```

- Stop the system under test by sending an interrupt (ctrl+c)
```
$ ctrl+c
```

- Transform coverage file to HTML
```
$ go tool cover -html=./coverage.cov -o coverage.html
```
2021-03-18 13:14:44 -07:00
Anis Elleuch
14d89eaae4 mrf: Enhance behavior for better results (#11788)
MRF was starting to heal when it received a disk connection event, which
is not good when a node with multiple disks reconnects to the cluster.

Besides, MRF needs the Remove healing option to remove stale files.
2021-03-18 11:19:02 -07:00
ebozduman
32b088a2ff No retries if minio server is down/connection refused err (#11809) 2021-03-18 11:05:48 -07:00
Harshavardhana
eed3b66d98 dsync: use refresh timer properly to avoid leaks (#11820)
the timer pattern should always involve a 'Stop()/Reset()', otherwise
 `time.NewTimer(duration).C` will always leak.
2021-03-17 16:37:13 -07:00
Harshavardhana
add3cd4e44 allow configuring delete cleanup interval from default 10minutes (#11818) 2021-03-17 15:15:58 -07:00
Harshavardhana
60b0f2324e storage write call path optimizations (#11805)
- write with O_DSYNC instead of O_DIRECT for smaller
  objects to avoid unaligned double Write() situations
  that may arise for smaller objects < 128KiB
- avoid fallocate() as it's not useful since we do not
  use Append() semantics anymore; fallocate is not useful
  for streaming I/O, so we can save a syscall
- createFile() doesn't need to validate the `bucket` name
  with an Lstat() call since createFile() is only used
  to write at `minioTmpBucket`
- use io.Copy() for unaligned writes to allow
  use of ReadFrom() on *os.File, enabling zero-buffer
  writes
2021-03-17 09:38:38 -07:00
Anis Elleuch
0eb146e1b2 add additional metrics for per-disk API latency and API call counts (#11250)
```
mc admin info --json
```

provides these details; for now, we shall eventually
expose this at the Prometheus level.

Co-authored-by: Harshavardhana <harsha@minio.io>
2021-03-16 20:06:57 -07:00
Minio Trusted
b379ca3bb0 Update yaml files to latest version RELEASE.2021-03-17T02-33-02Z 2021-03-17 02:56:28 +00:00
Andreas Auernhammer
e197800f90 s3v4: read and verify S3 signature v4 chunks separately (#11801)
This commit fixes a security issue in the signature v4 chunked
reader. Before, the reader returned unverified data to the caller
and would only verify the chunk signature once it has encountered
the end of the chunk payload.

Now, the chunk reader reads the entire chunk into an in-memory buffer,
verifies the signature and then returns data to the caller.

In general, this is a common security problem. When verifying data
streams, the verifier MUST NOT return data to the upper layers / its
callers as long as it has not verified the current data chunk / data
segment:
```
func (r *Reader) Read(buffer []byte) (int, error) {
   // read the next chunk into an internal buffer first ...
   if err := r.readNext(r.internalBuffer); err != nil {
      return 0, err
   }
   // ... verify its signature ...
   if err := r.verify(r.internalBuffer); err != nil {
      return 0, err
   }
   // ... and only then hand verified data to the caller.
   return copy(buffer, r.internalBuffer), nil
}
```
2021-03-16 13:33:40 -07:00
Ravind Kumar
980311fdfd Fix STANDARD defaults, point to new docs site. (#11800) 2021-03-16 12:04:28 -07:00
Klaus Post
771dea175c erasure pools enable faster checks for file not found (#11799)
For operations that require the object to exist, make it possible to
detect if the file isn't found in *any* pool.

This will allow these to return the error early without having to re-check.
2021-03-16 11:02:20 -07:00
Anis Elleuch
fa94682e83 policy: Add Merge API (#11793)
This commit adds a new API in pkg/bucket/policy package called
Merge to merge multiple policies of a user or a group into one
policy document.
2021-03-16 08:50:36 -07:00
Harshavardhana
6160188bf3 fix: erasure index based reading based on actual ParityBlocks (#11792)
in some setups with ordering issues in drive configuration,
we should rely on expected parityBlocks instead of `len(disks)/2`
2021-03-15 20:03:13 -07:00
Klaus Post
e5a1a2a974 s3 select: fix date_diff behavior (#11786)
Fixes #11785 and adds tests for samples given.
2021-03-15 14:15:52 -07:00
Steve Wills
642ba3f2d6 fix: runtime issue on FreeBSD due to missing O_NOATIME/O_DSYNC support (#11790)
See also:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=253937
2021-03-15 14:02:36 -07:00
Harshavardhana
4d80de899a fix: mips 32bit compilation issue (#11775)
fixes #11768
2021-03-15 06:02:09 -07:00
Harshavardhana
afbd3e41eb add missing principalId in web notifications (#11777)
fixes #11561
2021-03-13 10:52:43 -08:00
Poorna Krishnamoorthy
5e003549cc Replication: Enforce DeleteMarker disable setting (#11720)
This PR also enforces DeleteReplication
disable setting
2021-03-13 10:28:35 -08:00
Nitish Tiwari
7fa3e4106b Add consoleAdmin as a default canned policy (#11770) 2021-03-12 12:51:43 -08:00
Philip Brown
75db500e85 cmd/os-readdir_other.go - return nil with err (#11772) 2021-03-12 07:22:25 -08:00
Harshavardhana
910097bbbc update browser assets for react-qr-code 2021-03-11 16:58:15 -08:00
Minio Trusted
afd346417d Update yaml files to latest version RELEASE.2021-03-12T00-00-47Z 2021-03-12 00:23:57 +00:00
Harshavardhana
feafccf007 handle trimming '/' if present in the object names (#11765)
- MultipleDeletes should handle a '/' prefix in object names
- Trimming the slash alone is enough for ListObjects()
  prefix and markers

fixes #11769
2021-03-11 13:57:03 -08:00
S Santhosh Nagaraj
9b54fcdf12 feat: Add QR Code to Share Object Modal (#11735)
Co-authored-by: Kaan Kabalak <kaankabalak@gmail.com>
2021-03-11 11:21:45 -08:00
Anis Elleuch
f92b7a5621 Browser: Shared link has content-disposition header (#11712)
The shared link will be automatically downloadable when the user opens
the shared link in a browser.
2021-03-10 23:02:16 -08:00
Poorna Krishnamoorthy
c25e75f0b5 Fix redact LDAP password properly (#11762)
fixes #11742; the previous pull request #11750 fixed only the web trace
2021-03-10 11:05:38 -08:00
Harshavardhana
3ffe520643 add release build-arg to docker multiarch builds (#11752) 2021-03-10 09:41:44 -08:00
Klaus Post
952b0f111d Update S2 compression (#11753)
Relevant updates:

* Less allocations on decode: https://github.com/klauspost/compress/pull/322
* Fixed rare out-of-bounds write on amd64.
* ARM64 decompression assembly. Around 2x output speed. https://github.com/klauspost/compress/pull/324
* Speed up decompression on non-assembly platforms. https://github.com/klauspost/compress/pull/328

Upgrade cpuid to match simdjson.
2021-03-10 09:41:29 -08:00
Harshavardhana
777344a594 add release build-arg to docker multiarch builds (#11754)
additional paths to ignore for healing
2021-03-10 09:38:35 -08:00
Minio Trusted
9d118b372e Update yaml files to latest version RELEASE.2021-03-10T05-11-33Z 2021-03-10 05:34:48 +00:00
Poorna Krishnamoorthy
878bc6c72b Redact LDAP password if any in request trace (#11750)
Fixes: #11742
2021-03-09 14:43:16 -08:00
Klaus Post
fdc2f69218 truncate xl.meta files upon rewrites (#11749)
If the destination file exists and is larger, junk data would be left at the end of the file.
2021-03-09 14:42:24 -08:00
Anis Elleuch
0d124095ea lc: Return expiration header only when version id is unspecified (#11718)
Follow S3 specification to return Expiration header in HEAD/GET call
only when version-id is not passed in the request.
2021-03-09 13:19:08 -08:00
Harshavardhana
691035832a fix: normalize object layer inputs (#11534)
Applications sometimes make requests with `//` in
object names; make sure that all such names are
normalized to `/` and that all requests
prefixed with '/' have it removed, to ensure a
consistent view across all operations.
2021-03-09 12:58:22 -08:00
Anis Elleuch
eac66e67ec Use maximum parity for config files (#11740)
Some deployments have low parity (EC:2), but we really do not need to
save our config data with the same parity configuration.

N/2 parity would be better to keep MinIO configurations intact when an
unexpected number of drives fail.
2021-03-09 10:19:47 -08:00
Anis Elleuch
57f3ed22d4 erasure: Reduce the interval of cleaning up .trash folder (#11741)
Reduce from 30 to 10 minutes.
2021-03-09 09:45:38 -08:00
Poorna Krishnamoorthy
2f29719e6b resize replication worker pool dynamically after config update (#11737) 2021-03-09 02:56:42 -08:00
Andreas Auernhammer
209fe61dcc vault: disable Hashicorp Vault with opt-in (#11711)
This commit disables the Hashicorp Vault
support but provides a way to temporarily enable
it via `MINIO_KMS_VAULT_DEPRECATION=off`.

Vault support was deprecated long ago,
and this commit just requires users to take
action if they maintain a Vault integration.
2021-03-09 00:02:35 -08:00
Harshavardhana
8ecffdb7a7 Revert "Revert "heal: Heal bucket metadata when a fresh disk is inserted (#11734)""
This reverts commit 806df164b2.
2021-03-08 16:12:17 -08:00
Harshavardhana
806df164b2 Revert "heal: Heal bucket metadata when a fresh disk is inserted (#11734)"
This reverts commit 64662a49ff.
2021-03-08 14:43:24 -08:00
Klaus Post
4ac9ed4248 CopyObject: Do not remove crypto info when compressed (#11702)
Removing crypto info makes it impossible to copy encrypted+compressed objects.

Disable destination compression when encrypted.
2021-03-08 12:57:54 -08:00
Harshavardhana
3d8c512bba update browser package.json 2021-03-08 11:35:37 -08:00
Klaus Post
3ff5f55dcb Fetch fileinfo concurrently (#11700)
For non-erasure setups fetch up to 10 fileinfos concurrently.

Fixes #11625
2021-03-08 11:30:43 -08:00
Max Xu
097e5eba9f feat: remove go-bindata-assetfs in favor of embed by upgrading to go1.16 (#11733) 2021-03-08 11:26:43 -08:00
Andreas Auernhammer
ba6930bb13 fips: always enable AES in FIPS mode when using madmin (#11732)
This commit adds FIPS-specifc build tags to the madmin
package. When madmin is compiled with `--tags "fips"`
it will always use AES-GCM for encryption - not just
when an optimized AES implementation is available.
2021-03-08 10:58:02 -08:00
Anis Elleuch
64662a49ff heal: Heal bucket metadata when a fresh disk is inserted (#11734)
Replacing disk with a fresh one never heals bucket metadata (policy,
notification, etc..). This commit fixes the issue.
2021-03-08 10:54:13 -08:00
Harshavardhana
78e867e145 ignore healing .trash, .metacache and .multipart paths (#11725) 2021-03-07 09:38:31 -08:00
Harshavardhana
9ccc483df6 [feat]: change erasure coding default block size from 10MiB to 1MiB (#11721)
major performance improvements in range GETs to avoid large
read amplification when ranges are tiny and random

```
-------------------
Operation: GET
Operations: 142014 -> 339421
Duration: 4m50s -> 4m56s
* Average: +139.41% (+1177.3 MiB/s) throughput, +139.11% (+658.4) obj/s
* Fastest: +125.24% (+1207.4 MiB/s) throughput, +132.32% (+612.9) obj/s
* 50% Median: +139.06% (+1175.7 MiB/s) throughput, +133.46% (+660.9) obj/s
* Slowest: +203.40% (+1267.9 MiB/s) throughput, +198.59% (+753.5) obj/s
```

TTFB from 10MiB BlockSize
```
* First Access TTFB: Avg: 81ms, Median: 61ms, Best: 20ms, Worst: 2.056s
```

TTFB from 1MiB BlockSize
```
* First Access TTFB: Avg: 22ms, Median: 21ms, Best: 8ms, Worst: 91ms
```

Full object reads, however, do see a slight change which won't be
noticeable in the real world, so no comparisons are given.

TTFB still had improvements with full object reads with 1MiB

```
* First Access TTFB: Avg: 68ms, Median: 35ms, Best: 11ms, Worst: 1.16s
```

v/s

TTFB with 10MiB
```
* First Access TTFB: Avg: 388ms, Median: 98ms, Best: 20ms, Worst: 4.156s
```

This change should affect all new uploads; previous uploads should
continue to work as usual. But dramatic improvements can
be seen with these changes.
2021-03-06 14:09:34 -08:00
Anis Elleuch
abce040088 fix: Remove repetitive IAM ready message (#11723)
"IAM initialization complete" is printed each 5 minutes, avoid this by
printing it only during the first initialization of IAM.
2021-03-06 09:27:46 -08:00
Anis Elleuch
558762bdf6 iam: Return a slice of policies for a group (#11722)
A group can have multiple policies; a user subscribed to readwrite &
diagnostics can perform both S3 operations and admin operations.
However, the current code only returns one policy per group.
2021-03-06 09:27:06 -08:00
Harshavardhana
d971061305 use listPathRaw for HealObjects() instead of expensive WalkVersions() (#11675) 2021-03-06 09:25:48 -08:00
Andreas Auernhammer
509bcc01ad fips: do not use SHA-3 when building a FIPS-140 2 binary (#11710)
This commit disables SHA-3 for OpenID when building a
FIPS-140 2 compatible binary. While SHA-3 is a
cryptographic hash function accepted by NIST, there is no
FIPS-140 2 compliant implementation available when
using the boringcrypto Go branch.

Therefore, SHA-3 must not be used when building
a FIPS-140 2 binary.
2021-03-05 20:43:42 -08:00
Anis Elleuch
7ea95fcec8 Add mint versioning tests (#11500)
Co-authored-by: Harshavardhana <harsha@minio.io>
2021-03-05 19:15:42 -08:00
Krishnan Parthasarathi
79b0d056a2 lifecycle: don't transition delete markers (#11692) 2021-03-05 15:29:27 -08:00
Harshavardhana
ec547c0fa8 enable race detector CI for macos-latest (#11715) 2021-03-05 14:16:23 -08:00
Krishnan Parthasarathi
bcf9825082 Data usage should account for transitioned objects (#11717) 2021-03-05 14:15:53 -08:00
Harshavardhana
651487507a fix: Merge() should merge and return a copy (#11714)
fixes #11713
2021-03-05 09:42:46 -08:00
sgandon
124816f6a6 fix: IAM initialization failing with a large number of users/policies (#11701) 2021-03-05 08:36:16 -08:00
Klaus Post
fa9cf1251b Improve healing and reporting (#11312)
* Provide information on *actively* healing, buckets healed/queued, objects healed/failed.
* Add concurrent healing of multiple sets (typically on startup).
* Add bucket level resume, so restarts will only heal non-healed buckets.
* Print summary after healing a disk is done.
2021-03-04 14:36:23 -08:00
Harshavardhana
97e7a902d0 fix: shellcheck mint shellscript 2021-03-04 14:28:13 -08:00
Harshavardhana
d73d756a80 fix: incorrect errors thrown by lint (#11699)
fixes #11698
2021-03-04 14:27:38 -08:00
Aditya Manthramurthy
7488c77e7c Test LDAP connection configuration at startup (#11684) 2021-03-04 12:17:36 -08:00
Harshavardhana
786585009e fix: capture disks when entire peer is offline (#11697)
currently when one of the peers is down, the
drives from that peer are reported as '0/0'
offline; instead we should capture/filter the
drives from the peer and populate them appropriately
so that `mc admin info` displays correct info.
2021-03-04 10:07:05 -08:00
Anis Elleuch
7be7109471 locking: Add Refresh for better locking cleanup (#11535)
Co-authored-by: Anis Elleuch <anis@min.io>
Co-authored-by: Harshavardhana <harsha@minio.io>
2021-03-03 18:36:43 -08:00
Minio Trusted
464fa08f2e Update yaml files to latest version RELEASE.2021-03-04T00-53-13Z 2021-03-04 01:15:49 +00:00
Klaus Post
c3217bd6eb Use actual size for buffer selection (#11687)
For compressed inputs, this will be -1, but the object may be small.
2021-03-03 16:28:10 -08:00
Andreas Auernhammer
f14cc6c943 etag: add FromContentMD5 to parse content-md5 as ETag (#11688)
This commit adds the `FromContentMD5` function to
parse a client-provided content-md5 as ETag.

Further, it also adds multipart ETag computation
for future needs.
2021-03-03 12:58:28 -08:00
Harshavardhana
2c198ae7b6 fix: prometheus metrics disks_online count when disks are down (#11689)
prometheus metrics were using the total disk count instead
of the online disk count when disks were down; this
PR fixes this and also adds a new metric for
total_disk_count
2021-03-03 11:18:41 -08:00
Poorna Krishnamoorthy
690434514d Avoid notification event for replicas (#11683)
Creating notification events for replica creation
is not particularly useful, as the notification
event generated at the source already includes replication
completion events.

For applications using replica cluster as failover, avoiding
duplicate notifications for replica event will allow seamless
failover.
2021-03-03 11:13:31 -08:00
Harshavardhana
039f59b552 fix: missing user policy enforcement in PostPolicyHandler (#11682) 2021-03-03 08:47:08 -08:00
Harshavardhana
c6a120df0e fix: Prometheus metrics to re-use storage disks (#11647)
also re-use storage disks for all `mc admin server info`
calls, and implement a new LocalStorageInfo() API
call at ObjectLayer to look up local disks' storageInfo

also fixes bugs where there were double calls to StorageInfo()
2021-03-02 17:28:04 -08:00
Klaus Post
cd9e30c0f4 IAM: Block while loading users (#11671)
While starting up, a request that needs all IAM data will start another load operation if the initial one hasn't finished. This slows down both operations.

Block these requests until initial load has completed.

Blocking calls will be ListPolicies, ListUsers, ListServiceAccounts, ListGroups - and the calls that eventually trigger these. These will wait for the initial load to complete.

Fixes issue seen in #11305
2021-03-02 17:08:25 -08:00
Harshavardhana
f96d4cf7d3 fix: do not deny admins from changing other users' passwords
fixes a regression from #11680
2021-03-02 17:02:32 -08:00
Harshavardhana
879599b0cf fix: enforce deny if present for implicit permissions (#11680)
The implicit permission for any user is to be allowed to
change their own password; we need to restrict this
further. Even if there is an implicit allow for this
scenario, we have to honor Deny statements if they
are specified.
2021-03-02 15:35:50 -08:00
Harshavardhana
b1bb3f7016 [feat]: implement GetBucketPolicyStatus API (#11673)
additionally, add more APIs to the notImplemented
list and adjust routing rules appropriately
2021-03-01 23:10:33 -08:00
Anis Elleuch
e8d8dfa3ae Add metric for internode RPC calls errors (#11669) 2021-03-01 12:31:33 -08:00
Nitish Tiwari
bbd1244a88 Add support for mTLS for Audit log target (#11645) 2021-03-01 09:19:13 -08:00
Klaus Post
10bdb78699 fix: listObjectVersions Include object in marker (#11562)
ListObjectVersions would skip past the object in the marker when version id is specified. 
Make `listPath` return the object with the marker and truncate it if not needed.

Avoid having to parse unintended objects to find a version marker.
2021-03-01 08:12:02 -08:00
Shireesh Anjal
289b22d911 fix: pool number not added for one server (#11670)
The previous code was iterating over replies from peers and assigning
pool numbers to them, thus missing to add it for the local server.

Fixed by iterating over the server properties of all the servers
including the local one.
2021-03-01 08:09:43 -08:00
Harshavardhana
0b9c17443e update gopsutil to use the v3 API (#11638) 2021-03-01 00:15:46 -08:00
Bala FA
23f7ab40b3 Add PoolNumber field to madmin.ServerProperties (#11327) 2021-02-28 21:26:28 -08:00
Minio Trusted
e3f8830ab7 Update yaml files to latest version RELEASE.2021-03-01T04-20-55Z 2021-03-01 04:43:28 +00:00
Harshavardhana
2f4af09c01 fix: alow changes to readAllData to decrement activeCount() 2021-02-28 20:09:23 -08:00
Harshavardhana
37960cbc2f fix: avoid writing more content on network with O_DIRECT reads (#11659)
An io.LimitReader was missing for the 'length'
parameter of ranged requests, which would cause clients to
get truncated responses and errors.

fixes #11651
2021-02-28 15:33:03 -08:00
cbows
c67d1bf120 add unauthenticated lookup-bind mode to LDAP identity (#11655)
Closes #11646
2021-02-28 12:57:31 -08:00
Klaus Post
c5b3a675fa Block profiling tweaks (#11612)
The base profiles contain no valuable data; don't record them.

Reduce block rate by 2 orders of magnitude, should still capture just as valuable data with less CPU strain.
2021-02-27 09:22:14 -08:00
Harshavardhana
b690304eed use faster way for siphash (#11640) 2021-02-26 16:53:06 -08:00
Harshavardhana
9171d6ef65 rename all references from crawl -> scanner (#11621) 2021-02-26 15:11:42 -08:00
Harshavardhana
6386b45c08 [feat] use rename instead of recursive deletes (#11641)
most of the delete calls today spend time in
a blocking operation where multiple calls need
to be recursively sent to delete the objects;
instead we can use a rename operation to atomically
move the objects from the namespace to `tmp/.trash`

we can schedule deletion of objects at this
location once every 15 or 30mins, and we can also add
wait times between each delete operation.

this allows us to make deletes faster as well as
less chatty on the drives; each server locally runs
a goroutine which cleans this up regularly.
2021-02-26 09:52:27 -08:00
Andreas Auernhammer
1f659204a2 remove GetObject from ObjectLayer interface (#11635)
This commit removes the `GetObject` method
from the `ObjectLayer` interface.

The `GetObject` method is no longer used by
the HTTP handlers implementing the high-level
S3 semantics. Instead, they use the `GetObjectNInfo`
method, which returns both an object handle and
the object metadata.

Therefore, it is no longer necessary that a concrete
`ObjectLayer` implements `GetObject`.
2021-02-26 09:52:02 -08:00
Harshavardhana
f9f6fd0421 fix: service account permissions generated from LDAP user (#11637)
service accounts generated from an LDAP parent user
did not inherit the correct permissions; this PR fixes
this fully.
2021-02-25 13:49:59 -08:00
Klaus Post
85620dfe93 use bucket in path in distribution hash (#11634)
Use bucket in erasure distribution hash.

For the rare cases where objects with the same names are uploaded to many buckets.
2021-02-25 10:11:31 -08:00
Harshavardhana
a8e4f64ff3 Revert "fix: remove persistence layer for metacache store in memory (#11538)"
This reverts commit b23659927c.
2021-02-24 22:24:51 -08:00
Krishnan Parthasarathi
ca5c6e3160 fix: translate empty versionID string to null version where appropriate (#11629)
We store the null version as an empty string. We should translate it to the null
version for buckets with versioning suspended too.
2021-02-24 18:39:10 -08:00
Harshavardhana
b23659927c fix: remove persistence layer for metacache store in memory (#11538)
store the cache in memory instead of on disks to avoid large
write amplification for list-heavy workloads; store it in
memory and let it auto-expire.
2021-02-24 15:51:41 -08:00
Minio Trusted
b912e9ab41 Update yaml files to latest version RELEASE.2021-02-24T18-44-45Z 2021-02-24 19:08:36 +00:00
Andreas Auernhammer
c1a49be639 use crypto/sha256 for FIPS 140-2 compliance (#11623)
This commit replaces the usage of
github.com/minio/sha256-simd with crypto/sha256
of the standard library in all non-performance
critical paths.

This is necessary for FIPS 140-2 compliance, which
requires that all cryptographic primitives are implemented
by a FIPS-validated module.

Go can use the Google FIPS module. The boringcrypto
branch of the Go standard library uses the BoringSSL
FIPS module to implement cryptographic primitives like AES
or SHA256.

We only keep github.com/minio/sha256-simd when computing
the content-SHA256 of an object. Therefore, this commit
relies on a build tag `fips`.

When MinIO is compiled without the `fips` flag it will
use github.com/minio/sha256-simd. When MinIO is compiled
with the fips flag (go build --tags "fips") then MinIO
uses crypto/sha256 to compute the content-SHA256.
2021-02-24 09:00:15 -08:00
Klaus Post
03172b89e2 Ensure cache has finished deserializing (#11620)
Make sure that response has been fully deserialized before returning.
2021-02-24 02:59:49 -08:00
Harshavardhana
b517c791e9 [feat]: use DSYNC for xl.meta writes and NOATIME for reads (#11615)
We are better off using O_DSYNC instead of O_SYNC,
since we are only ever interested in the data being
persisted to disk, not the associated filesystem metadata.

For reads we ask customers to turn off atime updates
(mount with noatime), but we can also proactively use the
O_NOATIME flag to avoid atime updates upon reads.
2021-02-24 00:14:16 -08:00
Petr Tichý
14aef52004 remove Content-MD5 on Range requests (#11611)
This removes the Content-MD5 response header on Range requests in Azure
Gateway mode. The partial content MD5 doesn't match the full object MD5
in metadata.
2021-02-23 19:32:56 -08:00
Harshavardhana
67b20125e4 fix: do not ignore CREDITS file 2021-02-23 12:34:59 -08:00
Andreas Auernhammer
d4b822d697 pkg/etag: add new package for S3 ETag handling (#11577)
This commit adds a new package `etag` for dealing
with S3 ETags.

Even though ETag is often viewed as MD5 checksum of
an object, handling S3 ETags correctly is a surprisingly
complex task. While it is true that the ETag corresponds
to the MD5 for the most basic S3 API operations, there are
many exceptions in case of multipart uploads or encryption.

Worse, some S3 clients expect very specific behavior when
it comes to ETags. For example, some clients expect that the
ETag is a double-quoted string and fail otherwise.
Non-AWS compliant ETag handling has been a source of many bugs
in the past.

Therefore, this commit adds a dedicated `etag` package that provides
functionality for parsing, generating and converting S3 ETags.
Further, this commit removes the ETag computation from the `hash`
package. Instead, the `hash` package (i.e. `hash.Reader`) should
focus only on computing and verifying the content-sha256.

One core feature of this commit is to provide a mechanism to
communicate a computed ETag from a low-level `io.Reader` to
a high-level `io.Reader`.

This problem occurs when an S3 server receives a request and
has to compute the ETag of the content. However, the server
may also wrap the initial body with several other `io.Reader`,
e.g. when encrypting or compressing the content:
```
   reader := Encrypt(Compress(ETag(content)))
```
In such a case, the ETag should be accessible by the high-level
`io.Reader`.

The `etag` package provides a mechanism to wrap `io.Reader` implementations
such that the `ETag` can be accessed by a type-check.
This technique is applied to the PUT, COPY and Upload handlers.
2021-02-23 12:31:53 -08:00
Minio Trusted
1b63291ee2 Update yaml files to latest version RELEASE.2021-02-23T20-05-01Z 2021-02-23 20:28:30 +00:00
Harshavardhana
aa7244a9a4 fix: make sure to convert the error properly in HealBucket() (#11610)
server startup code expects the object layer to properly
convert errors into the proper types, so that in situations when
servers are coming up and quorum is not available, the servers
wait on each other.
2021-02-23 09:23:11 -08:00
Harshavardhana
2a79ea0332 isServerResolvable its sufficient to check server is reachable (#11609)
using isServerResolvable for expiration can lead to chicken-
and-egg problems: a lock might expire while a server
is booting up, causing locks to get perpetually expired.
2021-02-22 16:29:53 -08:00
Ritesh H Shukla
6e5c61d917 Skip printing error if empty for reporting bandwidth (#11606) 2021-02-22 13:41:40 -08:00
Aditya Manthramurthy
02e7de6367 LDAP config: fix substitution variables (#11586)
- In username search filter and username format variables we support %s for
replacing with the username.

- In group search filter we support %s for username and %d for the full DN of
the username.
2021-02-22 13:20:36 -08:00
Harshavardhana
cec12f4c76 update sha256-simd to v1.0.0 (#11607) 2021-02-22 13:19:53 -08:00
Harshavardhana
da676ac298 remove network calls for getLocalDisks (#11603) 2021-02-22 13:19:44 -08:00
Harshavardhana
18ec933085 fix: for containers use root-disk detection cleverly (#11593)
the current root-disk implementation had issues where root
disk partitions getting modified might race and provide
incorrect results; to avoid this, let's rely on the
DeviceID again and match against it instead.

In the case of containers, `/data` is one such extra entity that
needs to be verified as a root disk; due to how the 'overlay'
filesystem works, 'overlay' presents a completely
different 'device' id. Using `/data` as another entity
for fallback helps because our containers declare a 'VOLUME'
parameter that allows containers to automatically have a
virtual `/data` pointing to the container root path; this
can either be at `/` or `/var/lib/` (on a different partition).
2021-02-22 10:32:21 -08:00
Harshavardhana
c31d2c3fdc fix: CrawlAndGetDataUsage close pipe() before using a new one (#11600)
also make sure that errors during deserialization close
the reader with the right error type so that the Write() end
actually sees the final error; this avoids waitGroup usage
and waiting.
2021-02-22 10:04:32 -08:00
Harshavardhana
8778828a03 fix: read metadata in O_DIRECT if configured and supported (#11594)
reduce the page-cache pressure completely by moving
the entire read-phase of our operations to O_DIRECT;
primarily this is going to be very useful for chatty
metadata operations such as listing, scanner, ILM and
healing, to avoid filling up the page-cache upon
repeated runs.
2021-02-22 01:36:17 -08:00
Sarasa Kisaragi
48b212dd8e Fix HDFS wrong filepath if subpath provided (#11574) 2021-02-20 15:32:18 -08:00
Harshavardhana
be7de911c4 fix: update minio-go to fix an issue with S3 gateway (#11591)
since we changed our default envs to MINIO_ROOT_USER and
MINIO_ROOT_PASSWORD, which were not supported by the minio-go
credentials package, update minio-go to v7.0.10 for this
support. This also addresses a few bugs where users
had to specify AWS_ACCESS_KEY_ID as well to authenticate
with their S3 backend if they only used MINIO_ROOT_USER.
2021-02-20 11:10:21 -08:00
Harshavardhana
8cad407e0b fix: Bring support for symlink on regular files on NAS (#11383)
fixes #11203
2021-02-20 00:30:12 -08:00
Poorna Krishnamoorthy
85d2187c20 fix: ETag mismatch for large upload in replica (#11587) 2021-02-20 00:22:17 -08:00
Anis Elleuch
98d3f94996 metrics: Add the number of requests in the waiting queue (#11580)
We can use this metric to check if there are too many S3 clients in the
queue, which could explain why some of those S3 clients are timing out.

```
minio_s3_requests_waiting_total{server="127.0.0.1:9000"} 9981
```

If max_requests is 10000 then there is a strong possibility that clients
are timing out because of the queue deadline.
2021-02-20 00:21:55 -08:00
mailsmail
173284903b fix incorrect http range in SelectObjectContentHandler (#11585) 2021-02-19 17:55:28 -08:00
WangYuMu
c70240b893 fix incorrect values in sizing guide (#11583) 2021-02-19 10:05:04 -08:00
Harshavardhana
8ba2136e06 Update yaml files to latest version RELEASE.2021-02-19T04-38-02Z 2021-02-18 21:02:25 -08:00
Poorna Krishnamoorthy
2dce5d9442 fix: delete marker permanent delete replication (#11581) 2021-02-18 16:35:37 -08:00
Anis Elleuch
f28b063091 heal: Use healDeleteDangling global const in self healing (#11579)
A small fix, use healDeleteDangling constant instead of 'true' in the
self-healing code.
2021-02-18 15:16:20 -08:00
Klaus Post
90abea5b7a check if kafka producer is connected (#11578)
fixes #11576
2021-02-18 11:14:27 -08:00
Klaus Post
c5b2a8441b fix: faster healing when disk is replaced. (#11520) 2021-02-18 11:06:54 -08:00
Harshavardhana
0f5ca83418 move CI to go1.15 and go1.16 (#11570) 2021-02-18 09:42:32 -08:00
Klaus Post
8a6b13c239 Avoid synchronizing usage writes (#11560)
If the periodic `case <-t.C:` save gets held up for a long time it will end up
synchronizing all disk writes for saving the caches.

We add jitter to per-set writes so they don't sync up, and we don't hold a
lock for the write, since it isn't needed anyway.

If an outage prevents writes for a long while we also add individual 
waits for each disk in case there was a queue.

Furthermore limit the number of buffers kept to 2GiB, since this could get 
huge in large clusters. This will not act as a hard limit but should be enough 
for normal operation.
2021-02-18 00:38:37 -08:00
Poorna Krishnamoorthy
8e8a792d9d Allow delete marker replication from replica (#11566)
in the case of active-active replication.

This PR also has the following changes:

- add docs on replication design
- fix corner case of completing versioned delete on a delete marker
  when the target is down and `mc rm --vid` is performed repeatedly. Instead
  the version should still be retained in the `PENDING|FAILED` state until
  replication sync completes.
- remove `s3:Replication:OperationCompletedReplication` and
   `s3:Replication:OperationFailedReplication` from ObjectCreated 
  events type
2021-02-18 00:33:51 -08:00
Harshavardhana
95e0acbb26 fix: allow accountInfo with creds with parentUsers (#11568) 2021-02-17 20:57:17 -08:00
Poorna Krishnamoorthy
55037e6e54 lifecycle:Fix args passed to determine expiry header (#11567) 2021-02-17 19:25:19 -08:00
Harshavardhana
289e1d8b2a fix: reduce crawler memory usage by orders of magnitude (#11556)
currently the crawler waits for an entire readdir call to
return before it processes usage, lifecycle, replication
and healing - instead we should pass the applicator all
the way down to avoid building any special stack for all
the contents of a single directory.

This allows for

- no need to remember the entire list of entries per directory
  before applying the required functions
- no need to wait for entire readdir() call to finish before
  applying the required functions
2021-02-17 15:34:42 -08:00
Anis Elleuch
e07918abe3 lifecycle: Fix expiration header in some cases (#11565)
The Expiration header was not correctly computed in the case of non-current
versions.

The behavior is fixed in this commit.
2021-02-17 14:51:29 -08:00
Harshavardhana
ffea6fcf09 fix: rename crawler as scanner in config (#11549) 2021-02-17 12:04:11 -08:00
Klaus Post
11b2220696 Don't autoheal if disks are healing (#11558)
Don't spawn automatic healing ops if a disk is healing.
2021-02-17 10:18:12 -08:00
Harshavardhana
aa8450a2a1 fix: parallelize getPoolIdx() for object lookup (#11547) 2021-02-16 19:36:15 -08:00
Harshavardhana
87cce344f6 default to common conditions if conditions not present (#11546)
fixes #11544
2021-02-16 11:56:45 -08:00
Harshavardhana
7d4a2d2b68 fix: multiple pool reads parallelize when possible (#11537) 2021-02-16 02:43:47 -08:00
Minio Trusted
cfc8b92dff Update yaml files to latest version RELEASE.2021-02-14T04-01-33Z 2021-02-14 04:25:52 +00:00
Anis Elleuch
c4e12dc846 fix: in MultiDelete API return MalformedXML upon empty input (#11532)
To follow S3 spec
2021-02-13 09:48:25 -08:00
Harshavardhana
a94a9c37fa fix: support IAM policy handling for wildcard actions (#11530)
This PR fixes

- allow `s3:versionid` as a valid conditional for
  Get, Put, Tags and Object locking APIs
- allow additional headers that were missing for object APIs
- allow wildcard-based action matching
2021-02-12 23:05:09 -08:00
Harshavardhana
79b6a43467 fix: avoid timed value for network calls (#11531)
additionally simplify timedValue to use an RWMutex
to avoid concurrent calls to DiskInfo() getting
serialized; this has an effect on all calls that
use GetDiskInfo() on the same disks,

such as getOnlineDisks and getOnlineDisksWithoutHealing
2021-02-12 18:17:52 -08:00
Shireesh Anjal
928de04f7a fix: osinfos incomplete in case of warnings (#11505)
The function used for getting host information
(host.SensorsTemperaturesWithContext) returns warnings in some cases.

Returning with error in such cases means we miss out on the other useful
information already fetched (os info).

If the OS info has been successfully fetched, it should always be
included in the output irrespective of whether the other data (CPU
sensors, users) could be fetched or not.
2021-02-12 17:57:57 -08:00
Poorna Krishnamoorthy
93fd248b52 fix: save ModTime properly in disk cache (#11522)
fix #11414
2021-02-11 19:25:47 -08:00
Harshavardhana
2a7b123895 turn off http2 for TLS setups for now (#11523)
due to lots of issues with x/net/http2, as
well as with the bundled h2_bundle.go in the Go
runtime, HTTP/2 should be avoided for now.

https://github.com/golang/go/issues/23559
https://github.com/golang/go/issues/42534
https://github.com/golang/go/issues/43989
https://github.com/golang/go/issues/33425
https://github.com/golang/go/issues/29246

With such a collection of issues present, it
makes sense to remove HTTP/2 support for now
2021-02-11 15:53:04 -08:00
Harshavardhana
b3c56b53fb fix: metacache should only rename entries during cleanup (#11503)
To avoid large delays in metacache cleanup, use rename
instead of recursive delete calls; renames are cheaper.
Move the content to minioMetaTmpBucket and then clean up
this folder once every 24hrs instead.

If a new cache can replace an existing one, we should
let it replace it, since that is currently being saved anyway;
this avoids a pile-up of thousands of metacache entries for
the same listing calls that are not necessary to be stored
on disk.
2021-02-11 10:22:03 -08:00
Minio Trusted
0ef3e359d8 Update yaml files to latest version RELEASE.2021-02-11T08-23-43Z 2021-02-11 08:47:10 +00:00
Poorna Krishnamoorthy
f24d8127ab fix: DeleteMultipleObjectsHandler to process deleted objects correctly (#11515)
The DeleteMarkerVersionID returned by the lower layer should
not be used in the key to look up the ObjectToDelete map
2021-02-10 23:41:41 -08:00
Harshavardhana
7875d472bc avoid notification for non-existent delete objects (#11514)
Skip notifications on objects that might have had
an error during deletion; this also avoids unnecessary
replication attempts on such objects.

Refactor some places to make sure that we have responded
to the client before we

- notify
- schedule for replication
- apply lifecycle etc.
2021-02-10 22:00:42 -08:00
Harshavardhana
711adb9652 remove ipv6 fallbackdelay leave it as default 2021-02-10 17:35:09 -08:00
Poorna Krishnamoorthy
e6b4ea7618 More fixes for delete marker replication (#11504)
continuation of PR#11491 for multiple server pools and
bi-directional replication.

Moving proxying for GET/HEAD to the handler level rather than
the server pool layer, as this was also causing incorrect proxying
of HEAD.

Also fixing metadata update on CopyObject - minio-go was not passing
source version ID in X-Amz-Copy-Source header
2021-02-10 17:25:04 -08:00
Aditya Manthramurthy
466e95bb59 Return group DN instead of group name in LDAP STS (#11501)
- Additionally, check if the user or their groups has a policy attached during
the STS call.

- Remove the group name attribute configuration value.
2021-02-10 16:52:49 -08:00
Harshavardhana
881f98e511 fix: use getPoolIdx in DeleteObjects() (#11513)
filter out relevant objects for each pool to
avoid calling further delete operations on
subsequent pools where some of these objects
might not exist.

This is mainly useful to avoid situations
during bi-directional bucket replication.
2021-02-10 14:25:43 -08:00
Harshavardhana
cbf4bb62e0 fix: getPoolIdx decouple from top level options (#11512)
top-level options shouldn't be passed down for
GetObjectInfo() while verifying the objects in
different pools; this is to make sure that
we always get the value from the pool where
the object exists.
2021-02-10 11:45:02 -08:00
Anis Elleuch
682482459d Change the default object content-type to binary/octet-stream (#11508) 2021-02-10 08:56:37 -08:00
Krishnan Parthasarathi
b87fae0049 Simplify PutObjReader for plain-text reader usage (#11470)
This change moves away from a unified constructor for plaintext and encrypted
usage. NewPutObjReader is simplified for the plain-text reader use. For
encrypted reader use, WithEncryption should be called on an initialized PutObjReader.

Plaintext:
func NewPutObjReader(rawReader *hash.Reader) *PutObjReader

The hash.Reader is used to provide payload size and md5sum to the downstream
consumers. This is different from the previous version in that there is no need
to pass nil values for unused parameters.

Encrypted:
func WithEncryption(encReader *hash.Reader,
key *crypto.ObjectKey) (*PutObjReader, error)

This method sets up encrypted reader along with the key to seal the md5sum
produced by the plain-text reader (already setup when NewPutObjReader was
called).

Usage:
```
  pReader := NewPutObjReader(rawReader)
  // ... other object handler code goes here

  // Prepare the encrypted hashed reader
  pReader, err = pReader.WithEncryption(encReader, objEncKey)

```
2021-02-10 08:52:50 -08:00
Anis Elleuch
b8b44c879f lifecycle: Remove a single delete marker with noncurrent expiry rule (#11444)
NoncurrentVersionExpiry can remove single delete markers according to
S3 spec:

```
The NoncurrentVersionExpiration action in the same Lifecycle
configuration removes noncurrent objects 30 days after they become
noncurrent. Thus, in this example, all object versions are permanently
removed 90 days after object creation. You will have expired object
delete markers, but Amazon S3 detects and removes the expired object
delete markers for you.
```
2021-02-10 08:51:34 -08:00
Harshavardhana
f53d1de87f fix: missing data on multiple columns reading parquet (#11499)
fixes #11413
2021-02-10 08:49:48 -08:00
Shireesh Anjal
5a18d437ce fix: drive hw info incomplete when smartinfo fails (#11509)
Collection of SMART information doesn't work in certain scenarios, e.g.
in a container-based setup. In such cases, instead of returning an error
(without any data), we should only set the error on the smartinfo
struct, so that other important drive hw info like device, mountpoint,
etc. is retained in the output.
2021-02-10 08:48:14 -08:00
Poorna Krishnamoorthy
93eb549a83 fix: duplicate delete marker attempts in bi-directional replication (#11491) 2021-02-09 15:11:43 -08:00
Harshavardhana
fe3c39b583 use the new errgroup API whereever applicable (#11466)
start using the new errgroup concurrency control
API introduced in #11457
2021-02-09 12:08:25 -08:00
Harshavardhana
84d400487f fix: accountInfo API to cater for federated setups (#11484)
when MinIO is deployed in a federated setup, use etcd 
based listing of buckets to provide appropriate filtering 
of buckets per user.
2021-02-09 09:53:07 -08:00
Shireesh Anjal
3afa499885 fix: empty buckets/objects nodes in new setup (#11493) 2021-02-09 09:52:38 -08:00
Shireesh Anjal
13d015cf93 Fix make target hotfix (#11492)
The code inside the `hotfix` target is overriding the values set at the
beginning of the Makefile, affecting other make targets as well.

For example, running `TAG=mytag make docker` also ends up tagging
the docker image as a hotfix instead of `mytag`.

Using the `eval` function inside the `hotfix` target fixes this.
2021-02-09 09:29:43 -08:00
Krishna Srinivas
876b79b8d8 read-health check endpoint returns success if cluster can serve read requests (#11310) 2021-02-09 01:00:44 -08:00
Ritesh H Shukla
3d74efa6b1 fix: copy object for encrypted objects (#11490) 2021-02-08 19:58:17 -08:00
Harshavardhana
68d299e719 fix: case-insensitive lookups for metadata (#11489)
continuation of #11487, with more changes
2021-02-08 18:12:28 -08:00
Poorna Krishnamoorthy
f9c5636c2d fix: lookup metadata case-insensitively (#11487)
while setting replication options
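For illustration, a small sketch of the kind of case-insensitive lookup involved (a hypothetical helper, not the actual replication code):
```go
package main

import (
	"fmt"
	"strings"
)

// lookupCaseInsensitive scans metadata for key regardless of case,
// since S3 clients may send metadata keys in any casing.
func lookupCaseInsensitive(meta map[string]string, key string) (string, bool) {
	for k, v := range meta {
		if strings.EqualFold(k, key) {
			return v, true
		}
	}
	return "", false
}

func main() {
	meta := map[string]string{"X-Amz-Meta-Foo": "bar"}
	v, ok := lookupCaseInsensitive(meta, "x-amz-meta-foo")
	fmt.Println(v, ok) // bar true
}
```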
2021-02-08 16:19:05 -08:00
Klaus Post
9b10118d34 Metacache add abs entry limit (#11483)
Add an absolute limit to the number of metacaches for a bucket.

Delete excess caches if they haven't been handed out in an hour.
2021-02-08 11:36:16 -08:00
Harshavardhana
0e3211f4ad fix: server upgrades should have more descriptive error messages (#11476)
during rolling upgrade, provide a more descriptive error
message and discourage rolling upgrade in such situations,
allowing users to take action.

additionally also rename `slashpath -> pathutil` to avoid
a slightly mis-pronounced usage of the `path` package.
2021-02-08 10:15:12 -08:00
Harshavardhana
2e4d9124ad honor region specified for remote targets (#11480)
fixes #11472
2021-02-08 08:54:27 -08:00
Harshavardhana
6fef4c21b9 fix: align atomic variables for 32bit arch (#11475)
fixes #11474
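For context, a minimal sketch of the sync/atomic alignment rule behind this kind of fix (an illustrative struct, not the actual MinIO type): on 32-bit platforms, 64-bit words accessed atomically must be 64-bit aligned, and only the first word of an allocated struct is guaranteed to be.
```go
package main

import (
	"fmt"
	"sync/atomic"
)

// On ARM, x86-32 and 32-bit MIPS the caller must arrange 64-bit
// alignment of 64-bit words accessed atomically; the first word in an
// allocated struct can be relied upon to be 64-bit aligned. Keeping
// the uint64 first makes atomic access safe on 32-bit arches.
type counters struct {
	requests uint64 // placed first: guaranteed 64-bit aligned
	name     string
}

func main() {
	c := &counters{name: "example"}
	atomic.AddUint64(&c.requests, 1)
	fmt.Println(atomic.LoadUint64(&c.requests))
}
```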
2021-02-08 08:51:12 -08:00
Poorna Krishnamoorthy
8e1bbd989a replication:alloc UserDefined map before use (#11478) 2021-02-07 22:01:10 -08:00
Sarasa Kisaragi
152d7cd95b HDFS support keytab (#11473) 2021-02-07 17:29:47 -08:00
Harshavardhana
74080bf108 update CREDITS file 2021-02-07 00:37:12 -08:00
Minio Trusted
647a209c73 Update yaml files to latest version RELEASE.2021-02-07T01-31-02Z 2021-02-07 01:53:27 +00:00
Harshavardhana
0d057c777a remove restriction for multi pool distribution algo 2021-02-06 16:19:05 -08:00
Anis Elleuch
275f7a63e8 lc: Apply DeleteAction correctly to objects (#11471)
When lifecycle decides to Delete an object and not a version in a
versioned bucket, the code should create a delete marker and not
remove the scanned version.

This commit fixes the issue.
2021-02-06 16:10:33 -08:00
Shireesh Anjal
97fe57bba9 Remove Connections from SysProcess struct (#11373)
The connections info of the processes takes up a huge amount of space,
and is not important for adding any useful health checks. Removing it
will significantly reduce the size of the subnet health report.
2021-02-05 21:32:28 -08:00
Harshavardhana
88c1bb0720 fix: improper ticker usage in goroutines (#11468)
- lock maintenance loop was incorrectly sleeping
  as well as using ticker badly, leading to
  extra expiration routines getting triggered
  that could flood the network.

- multipart upload cleanup should be based on a
  timer instead of a ticker, to ensure that long
  running jobs don't get triggered twice (see the
  sketch after this list).

- make sure to get right lockers for object name
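A minimal sketch of the timer-based pattern referenced above (illustrative names, not the actual MinIO cleanup code); the timer is re-armed only after the previous run completes, so a long-running job cannot be triggered twice concurrently the way a ticker's fixed-schedule firings allow:
```go
package main

import (
	"context"
	"time"
)

// cleanupLoop re-arms the timer only after doCleanup returns, so a
// slow run simply delays the next one instead of stacking up the way
// a ticker's fixed-schedule firings can.
func cleanupLoop(ctx context.Context, interval time.Duration, doCleanup func()) {
	t := time.NewTimer(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			doCleanup()       // may take longer than interval
			t.Reset(interval) // safe: t.C was just drained
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	cleanupLoop(ctx, time.Second, func() { /* cleanup work */ })
}
```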
2021-02-05 19:23:48 -08:00
Harshavardhana
1fdafaf72f fix: listing for directory object when delimiter is present (#11463)
When you have a hierarchy of prefixes with directory objects,
our current master would list directory objects as prefixes
when a delimiter is present; this is inconsistent with AWS S3

```
aws s3api list-objects --endpoint-url http://localhost:9000 \
    --profile minio --bucket testbucket-v --prefix new/ --delimiter /
{
    "CommonPrefixes": [
        {
            "Prefix": "new/"
        },
        {
            "Prefix": "new/new/"
        }
    ]
}
```

Instead this PR fixes this to behave like AWS S3

```
aws s3api list-objects --endpoint-url http://localhost:9000 \
      --profile minio --bucket testbucket-v --prefix new/ --delimiter /
{
    "Contents": [
        {
            "Key": "new/",
            "LastModified": "2021-02-05T06:27:42.660Z",
            "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
            "Size": 0,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "",
                "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
            }
        }
    ],
    "CommonPrefixes": [
        {
            "Prefix": "new/new/"
        }
    ]
}
```
2021-02-05 16:24:40 -08:00
Ritesh H Shukla
5fe4bb6b36 Reduce redundant crawler logging (#11448) 2021-02-05 15:51:11 -08:00
Harshavardhana
99b733d44c fix: deletion of delete marker regression (#11465)
fixes #11440
fixes #11451
fixes #11454
2021-02-05 15:06:23 -08:00
Klaus Post
b4ac05523b Add parallel bucket healing during startup (#11457)
Replaces #11449

Does concurrent healing but limits concurrency to 50 buckets.

Aborts on first error.

`errgroup.Group` is extended to facilitate this in a generic way.
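A rough sketch of the bounded-concurrency idea (hypothetical names; the actual change extends MinIO's internal errgroup package, and a fuller version would cancel in-flight work via a context on the first error):
```go
package main

import (
	"fmt"
	"sync"
)

// healBuckets runs healOne for each bucket with at most maxParallel
// goroutines in flight, recording only the first error encountered.
func healBuckets(buckets []string, maxParallel int, healOne func(string) error) error {
	var (
		wg       sync.WaitGroup
		once     sync.Once
		firstErr error
		sem      = make(chan struct{}, maxParallel) // concurrency limiter
	)
	for _, b := range buckets {
		wg.Add(1)
		sem <- struct{}{} // blocks when maxParallel workers are busy
		go func(bucket string) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := healOne(bucket); err != nil {
				once.Do(func() { firstErr = err })
			}
		}(b)
	}
	wg.Wait()
	return firstErr
}

func main() {
	err := healBuckets([]string{"a", "b", "c"}, 50, func(bucket string) error {
		fmt.Println("healing", bucket)
		return nil
	})
	fmt.Println("done:", err)
}
```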
2021-02-05 13:04:26 -08:00
Anis Elleuch
c7eacba41c health-info: Add tags to errors (#11412)
We use multiple libraries in health info, but the returned error does
not indicate exactly what library call is failing, hence adding named
tags to returned errors whenever applicable.
2021-02-05 12:37:15 -08:00
Anis Elleuch
1887c25279 xl: Fix feeding NumVersions & SuccessorModTime to lifecycle (#11462)
After a recent refactor where lifecycle started to rely on ObjectInfo to
make decisions, it turned out there are some issues calculating
SuccessorModTime and NumVersions; hence lifecycle is not working as
expected in a versioned bucket in some cases.

This commit fixes the behavior.
2021-02-05 11:59:08 -08:00
Harshavardhana
c9b0f595b9 support directory objects in listing in certain scenarios (#11452)
When a directory object is presented as a `prefix`
param, our implementation tends to list only the objects
under the `prefix` rather than the `prefix` itself;
to mimic AWS S3-like flat key behavior, this PR ensures
that if `prefix` is a directory object, it is
automatically considered to be part of the eventual
listing result.

fixes #11370
2021-02-05 10:12:25 -08:00
Harshavardhana
8bb580abfc fix: use getObjectNInfo to avoid bytes.Buffer usage (#11428)
a few places were still using the legacy call GetObject(),
which was mainly designed for the client response writer;
use GetObjectNInfo() for internal calls instead.
2021-02-05 09:57:30 -08:00
Harshavardhana
af9cb5f5f2 remove deprecated StandardSCData 2021-02-05 01:34:23 -08:00
Harshavardhana
9497dfd804 docs: add deprecation notice for federation 2021-02-04 17:18:37 -08:00
Harshavardhana
da55a05587 fix aggressive expiration detection (#11446)
for some flaky networks this may be too fast a value;
choose a defensive value, and let this be addressed
properly in a new refactor of dsync with renewal logic.

Also enable a faster fallback delay to cater to misconfigured
IPv6 servers

refer
 - https://golang.org/pkg/net/#Dialer
 - https://tools.ietf.org/html/rfc6555
2021-02-04 16:56:40 -08:00
Harshavardhana
3fc4d6f620 update dependencies for relevant projects (#11445)
- minio-go -> v7.0.8
- ldap/v3 -> v3.2.4
- reedsolomon -> v1.9.11
- sio-go -> v0.3.1
- msgp -> v1.1.5
- simdjson-go, md5-simd, highwayhash
2021-02-04 13:49:52 -08:00
Ritesh H Shukla
67a8f37df0 fix: disk usage capacity metric reporting (#11435) 2021-02-04 12:26:58 -08:00
Anis Elleuch
075c429021 lifecycle: Add more validation to the config (#11382) 2021-02-04 11:26:02 -08:00
ArthurMa
df0c678167 fix: ldap config parsing issue for UserDNSearchFilter (#11437) 2021-02-04 11:07:29 -08:00
Harshavardhana
f108873c48 fix: replication metadata comparison and other fixes (#11410)
- using miniogo.ObjectInfo.UserMetadata is not correct
- using UserTags from Map->String() can change order
- ContentType comparison needs to be removed.
- Compare both lowercase and uppercase key names.
- do not silently error out constructing PutObjectOptions
  if tag parsing fails
- avoid notification for empty object info, failed operations
  should rely on valid objInfo for notification in all
  situations
- optimize copyObject implementation, also introduce a new 
  replication event
- clone ObjectInfo() before scheduling for replication
- add additional headers for comparison
- remove strings.EqualFold comparison to avoid unexpected bugs
- fix pool based proxying with multiple pools
- compare only specific metadata

Co-authored-by: Poorna Krishnamoorthy <poornas@users.noreply.github.com>
2021-02-03 20:41:33 -08:00
Andreas Auernhammer
871b450dbd crypto: add support for decrypting SSE-KMS metadata (#11415)
This commit refactors the SSE implementation and add
S3-compatible SSE-KMS context handling.

SSE-KMS differs from SSE-S3 in two main aspects:
 1. The client can request a particular key and
    specify a KMS context as part of the request.
 2. The ETag of an SSE-KMS encrypted object is not
    the MD5 sum of the object content.

This commit only focuses on the 1st aspect.

A client can send an optional SSE context when using
SSE-KMS. This context is remembered by the S3 server
such that the client does not have to specify the
context again (during multipart PUT / GET / HEAD ...).
The crypto context also includes the bucket/object
name to prevent renaming objects at the backend.

Now, AWS S3 behaves as follows:
 - If the user does not provide a SSE-KMS context
   it does not store one - resp. does not include
   the SSE-KMS context header in the response (e.g. HEAD).
 - If the user specifies a SSE-KMS context without
   the bucket/object name then AWS stores the exact
   context the client provided but adds the bucket/object
   name internally. The response contains the KMS context
   without the bucket/object name.
 - If the user specifies a SSE-KMS context with
   the bucket/object name then AWS again stores the exact
   context provided by the client. The response contains
   the KMS context with the bucket/object name.

This commit implements this behavior w.r.t. SSE-KMS.
However, as of now, no such object can be created since
the server rejects SSE-KMS encryption requests.

This commit is one stepping stone for SSE-KMS support.

Co-authored-by: Harshavardhana <harsha@minio.io>
2021-02-03 15:19:08 -08:00
Harshavardhana
f71e192343 avoid listing an empty dir without __XLDIR__ (#11427)
```
minio server /tmp/disk{1...4}
mc mb myminio/testbucket/
mkdir -p /tmp/disk{1..4}/testbucket/test-prefix/
```

This would end up being listed in the current
master, this PR fixes this situation.

If a directory is a leaf dir, we should avoid listing
it, since it cannot be deleted anymore with the
DeleteObject, DeleteObjects() API calls
because we natively support directories now.

Avoid listing it and let healing purge this folder
eventually in the background.
2021-02-03 14:06:54 -08:00
Anis Elleuch
b3f81e75f6 xl: Make it clear when to create delete marker for a non-existent object (#11423) 2021-02-03 10:33:43 -08:00
Klaus Post
a71e0483c9 Fix nil disks in getOnlineDisksWithHealing (#11419)
If a disk is skipped because it is nil, it is still returned.
2021-02-02 17:04:37 -08:00
Bahram Aghaei
f2d49ec21a Update ldap.md: add a link to ldap.go (#11409) 2021-02-02 15:47:04 -08:00
Klaus Post
4a9d9c8585 Update colinmarc/hdfs (#11417)
Updates needed dependency as well.

Fixes #11416
2021-02-02 15:37:30 -08:00
Harshavardhana
c885777ac6 Add support for TCP_QUICKACK (#11369)
TCP_QUICKACK is a setting that allows TCP endpoints
to acknowledge the receipt of data instantly in situations
where they would normally wait to see if more data
would be arriving.

https://assets.extrahop.com/whitepapers/TCP-Optimization-Guide-by-ExtraHop.pdf
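A hedged sketch of applying TCP_QUICKACK on Linux through a listener Control hook (using golang.org/x/sys/unix; not the actual MinIO code - note the flag is transient on Linux, so real code may need to re-apply it per connection):
```go
package main

import (
	"context"
	"log"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	lc := net.ListenConfig{
		// Control runs on the raw socket before it is bound, letting
		// us enable TCP_QUICKACK so ACKs are sent immediately instead
		// of being delayed to piggyback on response data.
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			if err := c.Control(func(fd uintptr) {
				serr = unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP, unix.TCP_QUICKACK, 1)
			}); err != nil {
				return err
			}
			return serr
		},
	}
	ln, err := lc.Listen(context.Background(), "tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	log.Println("listening with TCP_QUICKACK on", ln.Addr())
}
```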
2021-02-02 09:44:18 -08:00
Poorna Krishnamoorthy
fe3aca70c3 Make number of replication workers configurable. (#11379)
MINIO_API_REPLICATION_WORKERS env.var and
`mc admin config set api` allow number of replication
workers to be configurable. Defaults to half the number
of cpus available.

Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
2021-02-02 16:45:06 +05:30
Ritesh H Shukla
c4848f9b4f Add process start time to cluster metrics. (#11405) 2021-02-01 23:02:18 -08:00
Andreas Auernhammer
838d4dafbd gateway: don't use encrypted ETags for If-Match (#11400)
This commit fixes a bug in the S3 gateway that causes
GET requests to fail when the object is encrypted by the
gateway itself.

The gateway was not able to GET the object since it always
specified an `If-Match` pre-condition checking that the object
ETag matches an expected ETag - even for encrypted ETags.

The problem is that an encrypted ETag will never match the ETag
computed by the backend causing the `If-Match` pre-condition
to fail.

This commit fixes this by not sending an `If-Match` header when
the ETag is encrypted. This is acceptable because:
  1. A gateway-encrypted object consists of two objects at the backend
     and there is no way to provide a concurrency-safe implementation
     of two consecutive S3 GETs in the deployment model of the S3
     gateway.
     Ref: S3 gateways are self-contained and isolated - and there may
          be multiple instances at the same time (no lock across
          instances).
  2. Even if the data object changes (concurrent PUT) while gateway
     A has downloaded the metadata object (but not issued the GET to
     the data object => data race), then we don't return invalid data
     to the client since the decryption (of the currently uploaded data)
     will fail - given the metadata of the previous object.
2021-02-01 23:02:08 -08:00
swartz-k
8c663f93f7 fix: typo in chinese docs (#11401) 2021-02-01 18:42:58 -08:00
Minio Trusted
b4cb7edf85 Update yaml files to latest version RELEASE.2021-02-01T22-56-52Z 2021-02-01 23:28:23 +00:00
Anis Elleuch
e96fdcd5ec tagging: Add event notif for PUT object tagging (#11366)
An optimization to avoid double calling during PutObject tagging
2021-02-01 13:52:51 -08:00
Anis Elleuch
6ef678663e xl: Create a delete-marker when no other version exists (#11362)
Currently, it is not possible to create a delete-marker when xl.meta
does not exist (no version has been created for that object yet). This creates a
problem for replication and mc mirroring with versioning enabled.

This also follows S3 specification.
2021-02-01 13:23:50 -08:00
Harshavardhana
f737a027cf fix: regression introduced in federated listing buckets
regression was introduced in
6cd255d516 fix it properly.
2021-02-01 12:06:58 -08:00
Ravind Kumar
547efecd82 Reverting TOC due to gluegun incompatibility. Revert this commit once we migrate to new docs site (#11402) 2021-02-01 11:10:07 -08:00
Anis Elleuch
65aa2bc614 ilm: Remove object in HEAD/GET if having an applicable ILM rule (#11296)
Remove an object on the fly if there is a lifecycle rule with delete
expiry action for the corresponding object.
2021-02-01 09:52:11 -08:00
Daniel Jakots
de4421d6a3 fix: build on OpenBSD (#11384)
github.com/ncw/directio doesn't support OpenBSD, but OpenBSD has
syscall.Fsync. (It also has fdatasync:
https://man.openbsd.org/fdatasync
but apparently Golang can't call it).
2021-02-01 09:48:49 -08:00
swartz-k
6c3467e300 fix: docs typo in README_zh_CN (#11375) 2021-01-31 00:11:01 -08:00
Andreas Auernhammer
33554651e9 crypto: deprecate native Hashicorp Vault support (#11352)
This commit deprecates the native Hashicorp Vault
support and removes the legacy Vault documentation.

The native Hashicorp Vault documentation is marked as
outdated and deprecated for over a year now. We give
another 6 months before we start removing Hashicorp Vault
support and show a deprecation warning when a MinIO server
starts with a native Vault configuration.
2021-01-29 17:55:37 -08:00
Minio Trusted
451d9057f3 Update yaml files to latest version RELEASE.2021-01-30T00-20-58Z 2021-01-30 00:45:11 +00:00
Poorna Krishnamoorthy
c82aef0a56 fix ObjectInfo returned by CopyObject (#11377)
erasure CopyObject was returning old metadata
2021-01-29 14:49:18 -08:00
Harshavardhana
1e53bf2789 fix: allow expansion with newer constraints for older setups (#11372)
currently we had a restriction where older setups would
need to follow the previous style of keeping the "stripe" count
the same during expansion; we can relax that - instead, newer
pools can be expanded for older setups under the newer
constraint of a common parity ratio.
2021-01-29 11:40:55 -08:00
Harshavardhana
d48ff93ba3 add hotfix+commit-id+docker builds 2021-01-28 15:54:41 -08:00
Ritesh H Shukla
c8489a8f0c fix: log notification errors only once (#11350) 2021-01-28 13:40:31 -08:00
Klaus Post
2680772d4b Don't mark remotes online when shutting down (#11368)
Shutting down will mark remotes online once the shutdown has
started, since the context is canceled.

For example:

```
API: SYSTEM()
Time: 16:21:31 CET 01/28/2021
DeploymentID: 313b0065-c5a1-4aa3-9233-07223e77a730
Error: Storage resources are insufficient for the write operation .minio.sys/tmp/ced455c4-3d27-4bdd-95fc-b4707a179b8a/fd934ef3-8fc8-4330-abc1-f039fbbb9700/part.1 (cmd.InsufficientWriteQuorum)
       1: d:\minio\minio\cmd\data-usage.go:56:cmd.storeDataUsageInBackend()
Exiting on signal: INTERRUPT
Client http://127.0.0.1:9002/minio/lock/v5 online
Client http://127.0.0.1:9002/minio/storage/data/distxl/s2/d3/v24 online
Client http://127.0.0.1:9002/minio/storage/data/distxl/s2/d2/v24 online
Client http://127.0.0.1:9002/minio/storage/data/distxl/s2/d1/v24 online
Client http://127.0.0.1:9002/minio/peer/v12 online
Client http://127.0.0.1:9002/minio/storage/data/distxl/s2/d4/v24 online
```

Use a fresh context for health checks.
2021-01-28 13:38:12 -08:00
Harshavardhana
567f7bdd05 fix: verify overlapping domains when > 1 2021-01-28 13:08:53 -08:00
Harshavardhana
6cd255d516 fix: allow updated domain names in federation (#11365)
additionally also disallow overlapping domain names
2021-01-28 11:44:48 -08:00
Aditya Manthramurthy
e79829b5b3 Bind to lookup user after user auth to lookup ldap groups (#11357) 2021-01-27 17:31:21 -08:00
Poorna Krishnamoorthy
fd3f02637a fix: replication regression due to proxying requests (#11356)
PR #11165 introduced incorrect proxying for 2-way
replication, even when the object was not
yet replicated

Additionally, fix metadata comparisons when
deciding to do full replication vs metadata copy.

fixes #11340
2021-01-27 11:22:34 -08:00
Harshavardhana
e019f21bda fix: trigger heal if one of the parts are not found (#11358)
Previously we added a heal trigger when bit-rot checks
failed; now extend that to also heal when parts
are not found. This healing only gets triggered
if we can successfully decode the object, i.e. read
quorum is still satisfied for the object.
2021-01-27 10:21:14 -08:00
Anis Elleuch
e9ac7b0fb7 heal: Remove empty directories (#11354)
Since the introduction of __XLDIR__, an empty directory does not have a
meaning anymore in erasure mode. Make healing remove it wherever it
finds it.
2021-01-27 02:19:28 -08:00
Harshavardhana
1debd722b5 rename last remaining Zone->Pool 2021-01-26 20:47:42 -08:00
massintha azamoum
e7f6051f19 Send bucket name to peers when bucket notification is enabled (#11351) 2021-01-26 13:48:28 -08:00
Harshavardhana
6717295e18 fix: rename audit log docs and datastructure 2021-01-26 13:39:55 -08:00
Anis Elleuch
00cff1aac5 audit: per object send pool number, set number and servers per operation (#11233) 2021-01-26 13:21:51 -08:00
Harshavardhana
9722531817 fix: purge LDAP deprecated keys 2021-01-26 09:53:29 -08:00
Harshavardhana
5c6bfae4c7 fix: load credentials from etcd directly when possible (#11339)
under large deployments loading credentials might be
time consuming; while this is okay, and we will not
respond quickly for `mc admin user list`-like queries,
it is possible to support `mc admin user info`

just like how we handle authentication: by fetching
the user directly from the persistent store.

additionally support service accounts properly,
reloaded from etcd during watch() - this was missing

This PR is also half way remedy for #11305
2021-01-25 20:01:49 -08:00
Aditya Manthramurthy
5f51ef0b40 Add LDAP Lookup-Bind mode (#11318)
This change allows the MinIO server to be configured with a special (read-only)
LDAP account to perform user DN lookups.

The following configuration parameters are added (along with corresponding
environment variables) to LDAP identity configuration (under `identity_ldap`):

- lookup_bind_dn / MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN
- lookup_bind_password / MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD
- user_dn_search_base_dn / MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN
- user_dn_search_filter / MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER

This lookup-bind account is a service account that is used to lookup the user's
DN from their username provided in the STS API. When configured, searching for
the user DN is enabled and configuration of the base DN and filter for search is
required. In this "lookup-bind" mode, the username format is not checked and must
not be specified. This feature is to support Active Directory setups where the
DN cannot be simply derived from the username.

When the lookup-bind is not configured, the old behavior is enabled: the minio
server performs LDAP lookups as the LDAP user making the STS API request and the
username format is checked and configuring it is required.
2021-01-25 14:26:10 -08:00
Harshavardhana
7e266293e6 fix: notify bucket replication after replication/ilm (#11343) 2021-01-25 14:04:41 -08:00
Harshavardhana
eb6871ecd9 fix: LoginSTS should be an inline implementation (#11337)
STS tokens can be obtained by using local APIs
once the remote JWT token is presented; the current
code was not validating the incoming token in the
first place and was incorrectly making a network
operation using that token.

For the most part this always works without issue,
but under adversarial scenarios it allows a client
to hand-craft a request that can reach internal
services without authentication.

This kind of proxying should be avoided until the
incoming token has been validated.
2021-01-25 10:15:03 -08:00
Harshavardhana
9cdd981ce7 fix: expire locks only on participating lockers (#11335)
additionally also add a new ForceUnlock API, to
allow forcibly unlocking locks if possible.
2021-01-25 10:01:27 -08:00
Anis Elleuch
bd8020aba8 heal: Decode object name in healing result (#11348)
The user can see the __XLDIR__ prefix in mc admin heal when the command
heals an empty object with a trailing slash. This commit decodes the
name of the object before sending it back to the upper level.
2021-01-25 09:53:37 -08:00
Harshavardhana
09bc49bd51 fix: healBucket across sets should capture results properly (#11341)
healing `.minio.sys/config` returns incorrect quorum errors
across sets during healing of the buckets.
2021-01-25 09:45:09 -08:00
Harshavardhana
82f0471d1b honor maxWait heal config when maxIO hits (#11338) 2021-01-25 07:53:12 -08:00
Ritesh H Shukla
0bf2d84f96 update new metrics url docs (#11342) 2021-01-25 01:03:07 -08:00
Harshavardhana
6a95f412c9 avoid double CORS headers in federation (#11334)
CORS proxying adds double headers one
by the receiving server, one by proxied
server. Remove them before proxying
when 'Origin' header is found.
2021-01-23 18:27:23 -08:00
Ritesh H Shukla
7575c24037 Add open FD and FD limit to cluster metrics (#11328) 2021-01-22 18:30:16 -08:00
Harshavardhana
43f973c4cf fix: check for O_DIRECT support for reads and writes (#11331)
In case the user enables O_DIRECT for reads and the backend does
not support it, we proceed to turn it off instead
and print a warning. This validation avoids any unexpected
downtime that users may incur.
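A hedged sketch of such a capability probe on Linux (a hypothetical helper; real code must also honor O_DIRECT alignment requirements, and syscall.O_DIRECT only exists on Linux builds):
```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// supportsODirect probes whether the backend accepts O_DIRECT opens.
// Filesystems without O_DIRECT support (e.g. tmpfs) fail with EINVAL,
// which is the signal to fall back to buffered I/O with a warning.
func supportsODirect(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_RDONLY|syscall.O_DIRECT, 0)
	if errors.Is(err, syscall.EINVAL) {
		return false, nil // unsupported: turn O_DIRECT off and warn
	}
	if err != nil {
		return false, err // some other failure
	}
	f.Close()
	return true, nil
}

func main() {
	ok, err := supportsODirect("/etc/hostname")
	fmt.Println("O_DIRECT supported:", ok, "err:", err)
}
```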
2021-01-22 15:38:21 -08:00
Harshavardhana
1b453728a3 initialize forwarder after init() to avoid crashes (#11330)
The DNSCache dialer is a global value initialized in
init(), whereas Go evaluates `var =` declarations
before init() runs; also, we don't need to keep
proxy routers as global entities - register the
forwarder as necessary to avoid crashes.
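A tiny sketch of the initialization-order pitfall involved: Go evaluates package-level `var` initializers before any `init()` runs, so a var that consumes a value assigned only in `init()` captures the zero value (illustrative names):
```go
package main

import "fmt"

// dialer is only assigned in init(), which runs after all package-level
// var initializers have been evaluated.
var dialer func() string

// forwarder's initializer runs before init(), so it sees dialer == nil.
var forwarder = newForwarder(dialer)

func newForwarder(d func() string) func() string {
	if d == nil {
		return func() string { return "nil dialer captured" }
	}
	return d
}

func init() { dialer = func() string { return "real dialer" } }

func main() {
	fmt.Println(forwarder()) // prints "nil dialer captured"
}
```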
2021-01-22 15:37:41 -08:00
Harshavardhana
a6c146bd00 validate storage class across pools when setting config (#11320)
```
mc admin config set alias/ storage_class standard=EC:3
```

should only succeed if the parity ratio is valid for all
server pools; if not, we should fail proactively.

This PR also needs to bring other changes now that
we need to cater for variadic drive counts per pool.

Bonus fixes also various bugs reproduced with

- GetObjectWithPartNumber()
- CopyObjectPartWithOffsets()
- CopyObjectWithMetadata()
- PutObjectPart,PutObject with truncated streams
2021-01-22 12:09:24 -08:00
Harshavardhana
a35cbb3ff3 update healthcheck tests for new prometheus endpoint (#11333) 2021-01-22 11:04:52 -08:00
Harshavardhana
c080f04e66 fix: prometheus metrics link typo update to latest 2021-01-22 01:53:23 -08:00
Klaus Post
2167ba0111 Feed correct part number to sio (#11326)
When offsets were specified we relied on the first part number to be correct.

Recalculate based on offset.
2021-01-21 08:43:03 -08:00
Klaus Post
4e6d717f39 Compress profiling data (#11313)
Trace data can be rather large and compresses fine.

Compress profile data in zip files:

```
277.895.314 before.profiles.zip
152.800.318 after.profiles.zip
```
2021-01-20 15:49:53 -08:00
Poorna Krishnamoorthy
845e251fa9 fix: crash in notificationsys when peers online is 0 (#11307)
Check if the number of peers online > 0 before using peerClient
2021-01-20 13:13:05 -08:00
Harshavardhana
d1a8f0b786 fix possible crashes on deleteMarker replication (#11308)
A delete marker can have `metaSys` set to nil, which
can lead to crashes after the delete marker has
been healed.

Additionally, also fix the isObjectDangling check
for transitioned objects: those that do not have parts
should be treated similarly to a delete marker.
2021-01-20 13:12:12 -08:00
Klaus Post
dac19d7272 Clarify root disk error (#11314)
Make it clearer what the problem is and how to resolve it.
2021-01-20 13:11:42 -08:00
Harshavardhana
7624c8b9bb fix: honor storage class uniformity for multiple pools (#11309) 2021-01-20 01:41:18 -08:00
Klaus Post
19fb1086b2 select: Fix leak on compressed files (#11302)
Properly close gzip reader when done reading

fixes #11300
2021-01-19 17:51:46 -08:00
Harshavardhana
a5e23a40ff fix: allow delayed etcd updates to have fallbacks (#11151)
fixes #11149
2021-01-19 10:05:41 -08:00
Harshavardhana
1ad2b7b699 fix: add stricter validation for erasure server pools (#11299)
During expansion we need to validate if

- new deployment is expanded with newer constraints
- existing deployment is expanded with older constraints
- multiple server pools rejected if they have different
  deploymentID and distribution algo
2021-01-19 10:01:31 -08:00
Harshavardhana
b5049d541f fix: reduce an extra readdir() attempted on non-legacy setups (#11301)
to verify moving content and preserving legacy content,
we have a way to detect the objects through readdir();
this path is not necessary for most common cases on
newer setups, so avoid readdir() to save multiple system
calls.

also fix the CheckFile behavior for the most common
use case, i.e. without legacy format.
2021-01-19 10:01:06 -08:00
Harshavardhana
e0055609bb fix: crawler to skip healing the drives in a set being healed (#11274)
If an erasure set had a drive replacement recently, we don't
need to attempt healing on another drive with in the same erasure
set - this would ensure we do not double heal the same content
and also prioritizes usage for such an erasure set to be calculated
sooner.
2021-01-19 02:40:52 -08:00
Klaus Post
e8ce348da1 crypto: Escape JSON text (#10794)
Escape the JSON keys+values from the context.

We do not add the HTML escapes, since that is an extra escape level not mandatory for JSON.
2021-01-19 01:39:04 -08:00
Harshavardhana
6bfa162342 fix go mod tidy, remove unexpected deps 2021-01-18 20:38:23 -08:00
Ritesh H Shukla
b4add82bb6 Updated Prometheus metrics (#11141)
* Add metrics for nodes online and offline
* Add cluster capacity metrics
* Introduce v2 metrics
2021-01-18 20:35:38 -08:00
Harshavardhana
3bda8f755c update gjson dependency 2021-01-18 20:16:18 -08:00
Harshavardhana
3ca6330661 fix: optimize parentDirIsObject by moving isObject to storage layer (#11291)
For objects with `N` prefix depth, this PR reduces `N` such network
operations by converting `CheckFile` into a single bulk operation.

The reduction in chattiness here allows disks to be utilized more
cleanly while maintaining the same functionality; one
extra volume-check stat() call is also removed.

Update tests to test multiple sets scenario
2021-01-18 12:25:22 -08:00
Klaus Post
3d9000d5b5 Upgrade simdjson to v0.2.0 with 30-50% faster parsing (#11295) 2021-01-18 10:29:50 -08:00
Aditya Manthramurthy
3163a660aa Fix support for multiple LDAP user formats (#11276)
Fixes support for using multiple base DNs for user search in the LDAP directory
allowing users from different subtrees in the LDAP hierarchy to request
credentials.

- The username in the produced credentials is now the full DN of the LDAP user
to disambiguate users in different base DNs.
2021-01-17 21:54:32 -08:00
Harshavardhana
0dadfd1b3d fix: do not compute usage for not found lifecycle operations (#11288)
Currently we would proceed to apply incorrect lifecycle policies
for non-existent objects; this PR handles them appropriately.
2021-01-17 13:58:41 -08:00
Harshavardhana
98f76008c7 fix: bucket lifecycle again to remove Days parameter 2021-01-17 01:50:56 -08:00
Harshavardhana
8da0b7cf03 fix: lifecycle documentation for DeleteMarker 2021-01-17 01:37:25 -08:00
Harshavardhana
4315f93421 fix: make sure parentDirIsObject is used at set level (#11280)
parentDirIsObject is not using set-level understanding
to check for parent objects; without this, it can lead to
conflicts with objects that actually reside on a
separate set.
2021-01-17 01:11:48 -08:00
Harshavardhana
ddb5d7043a fix: standard storage class is allowed to be '0' 2021-01-16 17:32:25 -08:00
Harshavardhana
f903cae6ff Support variable server pools (#11256)
Current implementation requires server pools to have
same erasure stripe sizes, to facilitate same SLA
and expectations.

This PR allows server pools to be variadic, i.e. they
do not have to have the same erasure stripe sizes - instead
they should have an SLA for parity ratio.

If the parity ratio cannot be guaranteed by the new
server pool, the deployment is rejected i.e server
pool expansion is not allowed.
2021-01-16 12:08:02 -08:00
Minio Trusted
40d59c1961 Update yaml files to latest version RELEASE.2021-01-16T02-19-44Z 2021-01-16 02:43:53 +00:00
Poorna Krishnamoorthy
7090bcc8e0 fix: doc links and delete replication permissions enforcement (#11285) 2021-01-15 15:22:55 -08:00
Harshavardhana
c222bde14b fix: use common logging implementation for DNSCache (#11284) 2021-01-15 14:04:56 -08:00
Harshavardhana
4e06a72632 fix broken URL to k8s operator 2021-01-15 11:55:36 -08:00
Ravind Kumar
16040bc544 Update README for cleaner navigation, improved quickstart experience (#11277) 2021-01-15 11:54:05 -08:00
Harshavardhana
cc2d887e0e fix: whitespace and formatting in replication docs 2021-01-14 22:58:53 -08:00
Poorna Krishnamoorthy
c1b4b24236 Update replication docs (#11279) 2021-01-15 10:22:57 +05:30
Poorna Krishnamoorthy
feaf8dfb9a Fix replication status reported on completion (#11273)
Fixes: #11272
2021-01-13 11:52:28 -08:00
Harshavardhana
628ef081d1 fix: preserve cache calculated previously while moving from v2 to v3 (#11269)
This ensures that all the Prometheus monitoring and usage
trackers avoid triggering configured alerts; although we cannot
support v1 to v2 here, we can support v2 to v3.
2021-01-13 09:58:08 -08:00
Harshavardhana
44dff36ff7 listing with prefix prefixed with '/' should be ignored (#11268)
fixes #11265
2021-01-13 09:44:11 -08:00
Poorna Krishnamoorthy
b97d53b29c fix remote target healthcheck (#11267) 2021-01-12 20:48:04 -08:00
Aditya Manthramurthy
00af9881b0 LDAP doc fix: remove repeated paragraph and add emphasis (#11266) 2021-01-12 15:44:31 -08:00
Kanagaraj M
e09196d626 browser: fix file uploads with '#' in the name (#11261)
When the file name or the prefix contains `#`,
whatever comes after is ignored.

This is fixed by encoding the object path using
encodeURIComponent.

Fixes #11112
2021-01-12 12:47:46 -08:00
Harshavardhana
1a5775e2e8 enable small and large file optimization (#11260)
- for large objects we found that a 1MiB block for
  r/w works best.
- for small objects we found that a 128KiB block for
  r/w works best.
2021-01-12 10:20:39 -08:00
Anis Elleuch
e2579b1f5a azure: Use default upload parameters to avoid consuming too much memory (#11251)
A lot of memory is consumed when uploading small files in parallel; use
the default upload parameters and add MINIO_AZURE_UPLOAD_CONCURRENCY for
users to tweak.
2021-01-11 22:48:09 -08:00
Poorna Krishnamoorthy
7824e19d20 Allow synchronous replication if enabled. (#11165)
Synchronous replication can be enabled by setting the --sync
flag while adding a remote replication target.

This PR also adds proxying on GET/HEAD to another node in a
active-active replication setup in the event of a 404 on the current node.
2021-01-11 22:36:51 -08:00
Harshavardhana
317305d5f9 fix: regression in adding new replication targets (#11257) 2021-01-11 09:08:42 -08:00
Harshavardhana
e4e117faab fix: enable xl.json to xl.meta only if legacy drive is found (#11255)
another optimization: renameLegacyMetadata() never needs
to validate the bucket with os.Stat() again, removing
one extra syscall.
2021-01-11 02:27:04 -08:00
George Tsatsis
e8176fe978 Use -new during OpenSSL certificate generation (#11199)
As per https://stackoverflow.com/a/3758443/8156177, OpenSSL expects a certificate via STDIN. 
`-new` will allow a new certificate to be generated instead.
2021-01-11 02:24:50 -08:00
Harshavardhana
828602d672 fix: nancy github URL has changed 2021-01-10 13:37:05 -08:00
Minio Trusted
d9224fbc65 Update yaml files to latest version RELEASE.2021-01-08T21-18-21Z 2021-01-08 21:37:35 +00:00
Klaus Post
51dad1d130 Fix missing GetObjectNInfo Closure (#11243)
Review for missing Close of returned value from `GetObjectNInfo`.

This was often obscured by the stuff that auto-unlocks when reaching EOF.
2021-01-08 10:12:26 -08:00
Harshavardhana
4593b146be fix: print errors only when metacache status has errors (#11248) 2021-01-08 16:52:19 +05:30
Harshavardhana
f21d650ed4 fix: readData in bulk call using messagepack byte wrappers (#11228)
This PR refactors the way we use buffers for O_DIRECT and
re-uses those buffers for the messagepack reader/writer.

After some extensive benchmarking, we found that not all objects
have this benefit; only objects smaller than 64KiB see
this benefit overall.

Benefits are seen for almost all objects from

1KiB - 32KiB

Beyond this, no objects benefit from the bulk call approach,
as the latency of bytes sent over the wire v/s streaming
content directly from disk negate each other, with no
remarkable benefit.

All other optimizations include reuse of msgp.Reader,
msgp.Writer using sync.Pool's for all internode calls.
2021-01-07 19:27:31 -08:00
Harshavardhana
a4f6705874 expire stale locks when owner is down (#11247)
fixes #11246
2021-01-07 19:16:18 -08:00
Poorna Krishnamoorthy
b35b537e3f Pass versionID to checkReplicateDelete in web handler (#11244) 2021-01-07 15:28:27 -08:00
Harshavardhana
5c52d5ffc7 fix: treat errVolumeNotFound as EOF error in listPathRaw (#11238) 2021-01-07 09:52:53 -08:00
Harshavardhana
f0808bb2e5 fix: getObject fd leaks in transition and replication code (#11237) 2021-01-06 16:13:10 -08:00
Harshavardhana
a6dee21092 initialize IAM store before Init() to avoid any crash (#11236) 2021-01-06 13:40:20 -08:00
Anis Elleuch
6f781c5e7a heal: Reduce whitespace ticker to 5 seconds (#11234)
A 30-second whitespace interval is too long for some setups, which time out when
there is no read activity for a short time; reduce the subnet health whitespace ticker
to 5 seconds, since it has no cost at all.
2021-01-06 13:29:50 -08:00
Harshavardhana
f8ca859790 fix: server/gateway banner formatting (#11230) 2021-01-06 10:38:07 -08:00
Kanagaraj M
b78521cd69 update license verifier to use updated keys (#11197) 2021-01-06 10:17:05 -08:00
Harshavardhana
76e2713ffe fix: use buffers only when necessary for io.Copy() (#11229)
Use separate sync.Pool for writes/reads

Avoid passing buffers to io.CopyBuffer():
if the writer or reader implements io.WriterTo or io.ReaderFrom
respectively, then it is useless for sync.Pool to allocate
buffers on its own, since they will be completely ignored
by the io.CopyBuffer Go implementation.

Improve this wherever we see this to be optimal.

This allows us to be more efficient on memory usage.
```
// copyBuffer is the actual implementation of Copy and CopyBuffer.
// if buf is nil, one is allocated.
func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error) {
	// If the reader has a WriteTo method, use it to do the copy.
	// Avoids an allocation and a copy.
	if wt, ok := src.(WriterTo); ok {
		return wt.WriteTo(dst)
	}
	// Similarly, if the writer has a ReadFrom method, use it to do the copy.
	if rt, ok := dst.(ReaderFrom); ok {
		return rt.ReadFrom(src)
	}
```

From the readahead package:
```
// WriteTo writes data to w until there's no more data to write or when an error occurs.
// The return value n is the number of bytes written.
// Any error encountered during the write is also returned.
func (a *reader) WriteTo(w io.Writer) (n int64, err error) {
	if a.err != nil {
		return 0, a.err
	}
	n = 0
	for {
		err = a.fill()
		if err != nil {
			return n, err
		}
		n2, err := w.Write(a.cur.buffer())
		a.cur.inc(n2)
		n += int64(n2)
		if err != nil {
			return n, err
		}
```
2021-01-06 09:36:55 -08:00
Harshavardhana
b5d291ea88 fix: rename remaining zone -> pool (#11231) 2021-01-06 09:35:47 -08:00
Klaus Post
eb9172eecb Allow Compression + encryption (#11103) 2021-01-05 20:08:35 -08:00
Klaus Post
97a4c120e9 Add Optimization as a type of change (#11221)
* Add Optimization as a type of change. 
* Make the checkmark affirmative instead of disaffirming
2021-01-05 14:53:52 -08:00
Poorna Krishnamoorthy
64bddf47d8 Pass deletemarker correctly to replicate opts (#11227)
fixes: #11180
2021-01-05 14:12:37 -08:00
Harshavardhana
4ed45ce543 fix: healing buckets during pool expansion (#11224)
fixes #11209
2021-01-05 13:24:22 -08:00
Klaus Post
ad511b0eb8 tests: Fix occasional data race (#11223)
CI tests could trigger a data race.

Servers are generally not expected to reinitialize, so tests could trigger data races when reinitializing and async operations are running.

We add the option to safely reset global vars instead of overwriting.

Fixes races like:

```
WARNING: DATA RACE
Read at 0x00000477ab18 by goroutine 1159:
  github.com/minio/minio/cmd.FileInfo.ToObjectInfo()
      /home/runner/work/minio/minio/cmd/erasure-metadata.go:105 +0x16d
  github.com/minio/minio/cmd.erasureObjects.putObject()
      /home/runner/work/minio/minio/cmd/erasure-object.go:748 +0x13f8
  github.com/minio/minio/cmd.(*erasureObjects).listPath.func3.2()
      /home/runner/work/minio/minio/cmd/metacache-set.go:682 +0x7d3
  github.com/minio/minio/cmd.newMetacacheBlockWriter.func1.2()
      /home/runner/work/minio/minio/cmd/metacache-stream.go:777 +0x1c4
  github.com/minio/minio/cmd.newMetacacheBlockWriter.func1()
      /home/runner/work/minio/minio/cmd/metacache-stream.go:806 +0x614

Previous write at 0x00000477ab18 by goroutine 1269:
  [failed to restore the stack]

Goroutine 1159 (running) created at:
  github.com/minio/minio/cmd.newMetacacheBlockWriter()
      /home/runner/work/minio/minio/cmd/metacache-stream.go:760 +0x112
  github.com/minio/minio/cmd.(*erasureObjects).listPath.func3()
      /home/runner/work/minio/minio/cmd/metacache-set.go:672 +0xe22

Goroutine 1269 (running) created at:
  testing.(*T).Run()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1095 +0x537
  testing.runTests.func1()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1339 +0xa6
  testing.tRunner()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1050 +0x1eb
  testing.runTests()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1337 +0x594
  testing.(*M).Run()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1252 +0x2ff
  github.com/minio/minio/cmd.TestMain()
      /home/runner/work/minio/minio/cmd/test-utils_test.go:120 +0x44e
  main.main()
      _testmain.go:1408 +0x223
==================
==================
WARNING: DATA RACE
Read at 0x00000477aae8 by goroutine 1159:
  github.com/minio/minio/cmd.(*BucketVersioningSys).Enabled()
      /home/runner/work/minio/minio/cmd/bucket-versioning.go:26 +0x52
  github.com/minio/minio/cmd.FileInfo.ToObjectInfo()
      /home/runner/work/minio/minio/cmd/erasure-metadata.go:105 +0x197
  github.com/minio/minio/cmd.erasureObjects.putObject()
      /home/runner/work/minio/minio/cmd/erasure-object.go:748 +0x13f8
  github.com/minio/minio/cmd.(*erasureObjects).listPath.func3.2()
      /home/runner/work/minio/minio/cmd/metacache-set.go:682 +0x7d3
  github.com/minio/minio/cmd.newMetacacheBlockWriter.func1.2()
      /home/runner/work/minio/minio/cmd/metacache-stream.go:777 +0x1c4
  github.com/minio/minio/cmd.newMetacacheBlockWriter.func1()
      /home/runner/work/minio/minio/cmd/metacache-stream.go:806 +0x614

Previous write at 0x00000477aae8 by goroutine 1269:
  [failed to restore the stack]

Goroutine 1159 (running) created at:
  github.com/minio/minio/cmd.newMetacacheBlockWriter()
      /home/runner/work/minio/minio/cmd/metacache-stream.go:760 +0x112
  github.com/minio/minio/cmd.(*erasureObjects).listPath.func3()
      /home/runner/work/minio/minio/cmd/metacache-set.go:672 +0xe22

Goroutine 1269 (running) created at:
  testing.(*T).Run()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1095 +0x537
  testing.runTests.func1()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1339 +0xa6
  testing.tRunner()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1050 +0x1eb
  testing.runTests()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1337 +0x594
  testing.(*M).Run()
      /opt/hostedtoolcache/go/1.14.13/x64/src/testing/testing.go:1252 +0x2ff
  github.com/minio/minio/cmd.TestMain()
      /home/runner/work/minio/minio/cmd/test-utils_test.go:120 +0x44e
  main.main()
      _testmain.go:1408 +0x223
==================
```
2021-01-05 10:45:26 -08:00
Harshavardhana
cb0eaeaad8 feat: migrate to ROOT_USER/PASSWORD from ACCESS/SECRET_KEY (#11185) 2021-01-05 10:22:57 -08:00
Minio Trusted
f3f0041ad0 Update yaml files to latest version RELEASE.2021-01-05T05-22-38Z 2021-01-05 05:42:54 +00:00
Harshavardhana
d0027c3c41 do not use large buffers if not necessary (#11220)
without this change, there is a performance
regression for small-object GETs; this makes
the overall speed go back to pre-'59d363'
commit days.
2021-01-04 18:51:52 -08:00
Anis Elleuch
cb7fc99368 handlers: Avoid initializing a struct in each handler call (#11217) 2021-01-04 09:54:22 -08:00
Harshavardhana
a4383051d9 remove/deprecate crawler disable environment (#11214)
with changes present to automatically throttle the crawler
at runtime, there is no need to have an environment
value to disable crawling. crawling is a fundamental
piece for healing, lifecycle and many other features;
there is no good reason anyone would need to disable
this on a production system.

* Apply suggestions from code review
2021-01-04 09:43:31 -08:00
Harshavardhana
e7ae49f9c9 fix: calculate prometheus disks_offline/disks_total correctly (#11215)
fixes #11196
2021-01-04 09:42:09 -08:00
Anis Elleuch
153d4be032 tracing: NumSubscribers() to use atomic instead of mutex (#11219)
globalSubscribers.NumSubscribers() is heavily used in S3 requests and it
uses a mutex; use atomic.Load instead, since it is faster
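A minimal sketch of the mutex-to-atomic swap described here (an illustrative type, not MinIO's actual pubsub implementation):
```go
package main

import (
	"fmt"
	"sync/atomic"
)

// PubSub tracks the subscriber count with an atomic so hot paths can
// check it without taking a lock.
type PubSub struct {
	numSubscribers int32
}

func (ps *PubSub) Subscribe()   { atomic.AddInt32(&ps.numSubscribers, 1) }
func (ps *PubSub) Unsubscribe() { atomic.AddInt32(&ps.numSubscribers, -1) }

// NumSubscribers is safe to call concurrently and never blocks.
func (ps *PubSub) NumSubscribers() int32 {
	return atomic.LoadInt32(&ps.numSubscribers)
}

func main() {
	ps := &PubSub{}
	ps.Subscribe()
	fmt.Println(ps.NumSubscribers()) // 1
}
```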

Co-authored-by: Anis Elleuch <anis@min.io>
2021-01-04 09:40:30 -08:00
Anis Elleuch
dfd99b6d8f handlers: Little bit more optimizations (#11211) 2021-01-04 00:01:06 -08:00
Harshavardhana
c4b1d394d6 erasure: avoid io.Copy in hotpaths to reduce allocation (#11213) 2021-01-03 16:27:34 -08:00
Harshavardhana
c4131c2798 feat: Small object optimization read data in single bulk call (#11207) 2021-01-03 11:27:57 -08:00
Anis Elleuch
c9d502e6fa parentDirIsObject() to return quickly with nonexistent parent (#11204)
Rewrite the parentIsObject() function. Currently, if a client uploads
a/b/c/d, we always check whether c, b, and a are actual objects or not.

The new code checks in reverse order and quits quickly if
a segment doesn't exist.

So if a, b, or c in 'a/b/c' does not exist in the first place, it returns
false quickly.
2021-01-02 12:01:29 -08:00
Anis Elleuch
677e80c0f8 xl: Remove check-dir in ReadVersion (#11200)
The only purpose of the check-dir flag in
ReadVersion is to return 404 when
an object has xl.meta but no data.

This causes an extra call to the disk,
which can be penalizing in the case of a busy system
where disks receive many concurrent accesses.
2021-01-02 10:35:57 -08:00
Harshavardhana
aa85af4d1a fix: missing CopyObjectPart maxClients reorder 2021-01-01 23:07:37 -08:00
Anis Elleuch
ae731d232f trace: Reorder http/trace maxClients wrapping for correct tracing (#11202)
mc admin trace does not show the correct handler name in the output: it
is printing `maxClients' for all handlers. The reason is that the wrong
order of handler wrapping.
2021-01-01 23:06:07 -08:00
Anis Elleuch
a317d220ed xl-storage: Do not stat bucket assuming the object exists (#11201)
In HEAD/GET, STAT the bucket only if the
object does not exist, in order to return the correct
error response.
2021-01-01 09:44:36 -08:00
Harshavardhana
3e1221a01c fix: log once updating dataUsageCache versions (#11190)
also reduce usage of *bytes.Buffer for
reading `usage-cache.bin`
2020-12-31 09:45:09 -08:00
Baptiste Mille-Mathias
c1f6ca6697 Fix caddy project url (#11198) 2020-12-31 09:44:07 -08:00
Ritesh H Shukla
36fc2f98ed fix: admin trace throttled requests (#11192) 2020-12-30 21:04:55 -08:00
Ritesh H Shukla
556524c715 Reduce logging when peer is offline (#11184) 2020-12-30 14:38:54 -08:00
Harshavardhana
428f288379 update release Dockerfile string 2020-12-30 08:50:43 -08:00
0xflotus
cde801282d chore: enabled syntax highlighting in docs (#11182) 2020-12-29 17:38:28 -08:00
Ravind Kumar
6cf0008469 fix: docs typo in object lock docs (#11181) 2020-12-29 16:14:10 -08:00
Minio Trusted
7b0330a98c Update yaml files to latest version RELEASE.2020-12-29T23-29-29Z 2020-12-29 23:46:24 +00:00
Harshavardhana
cc457f1798 fix: enhance logging in crawler use console.Debug instead of logger.Info (#11179) 2020-12-29 01:57:28 -08:00
Harshavardhana
ca0d31b09a fix: re-arrange handlers to handle requests on /minio (#11177)
fixes #11175
2020-12-28 17:10:33 -08:00
Harshavardhana
445a9bd827 fix: heal optimizations in crawler to avoid multiple healing attempts (#11173)
Fixes two problems

- Double healing when bitrot is enabled; instead, attempt healing
  once in applyActions() before lifecycle is applied.

- If applyActions() is successful and getSize() returns a proper
  value, then the object is accounted for and should be removed
  from the oldCache namespace map to avoid double heal attempts.
2020-12-28 10:31:00 -08:00
Harshavardhana
d8d25a308f fix: use HealObject for cleaning up dangling objects (#11171)
the main reason is that HealObjects starts a recursive listing
for each object; this can take a really long time on
large namespaces. Instead, avoid the recursive listing and just
perform HealObject() at the prefix.

deletes already handle purging dangling content; we
don't need to achieve this by doing a recursive listing,
which in turn can delay crawling significantly.
2020-12-27 15:42:20 -08:00
Harshavardhana
c19e6ce773 avoid a crash in crawler when lifecycle is not initialized (#11170)
Bonus: for static buffers, use bytes.NewReader instead of
bytes.NewBuffer, to use a more reader-friendly implementation
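The difference, in a small sketch: bytes.NewReader wraps an existing []byte read-only and is seekable, while bytes.NewBuffer implies a mutable read/write buffer that is unnecessary for static data:
```go
package main

import (
	"bytes"
	"io"
	"os"
)

func main() {
	data := []byte("static buffer contents\n")

	// bytes.NewReader gives a read-only, seekable view with no copy;
	// it also implements io.Seeker/io.ReaderAt, unlike *bytes.Buffer.
	r := bytes.NewReader(data)
	io.Copy(os.Stdout, r)

	// Rewind and reuse - not possible with a drained *bytes.Buffer.
	r.Seek(0, io.SeekStart)
	io.Copy(os.Stdout, r)
}
```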
2020-12-26 22:58:06 -08:00
Minio Trusted
d3c853a3be Update yaml files to latest version RELEASE.2020-12-26T01-35-54Z 2020-12-26 01:53:30 +00:00
Harshavardhana
59d3639396 fix: inherit heal opts globally, including bitrot settings (#11166)
Bonus re-use ReadFileStream internal io.Copy buffers, fixes
lots of chatty allocations when reading metacache readers
with many sustained concurrent listing operations

```
   17.30GB  1.27% 84.80%    35.26GB  2.58%  io.copyBuffer
```
2020-12-24 23:04:03 -08:00
Harshavardhana
027e17468a fix: discarding results do not attempt in-memory metacache writer (#11163)
Optimizations include

- do not write the metacache block if the size of the
  block is '0' and it is the first block - where listing
  is attempted for a transient prefix, this helps to
  avoid creating lots of empty metacache entries for
  `minioMetaBucket`

- avoid the entire initialization sequence of cacheCh,
  metacacheBlockWriter if we are simply going to skip
  them when discardResults is set to true.

- No need to hold write locks while writing metacache
  blocks - each block is unique, per bucket, per prefix
  and also is written by a single node.
2020-12-24 15:02:02 -08:00
Harshavardhana
45ea161f8d webUI: change listing to 1000 keys from browser UI (#11159)
gateway implementations do not handle maxKeys being
`-1` properly unlike MinIO implementation, handle it
by setting an appropriate value.

fixes #11158
2020-12-23 19:58:15 -08:00
Poorna Krishnamoorthy
7b8a456f68 Update lifecycle README docs (#11160)
Removing reference to transition feature in docs as this
feature is being revamped to provide better extensibility
across different cloud targets.
2020-12-23 19:56:55 -08:00
Harshavardhana
b43906f6ee fix: docs typos and keywords 2020-12-23 11:59:20 -08:00
Harshavardhana
6a66f142d4 fix: strict quorum in list should list on all drives (#11157)
current implementation was incorrect, it in-fact
assumed only read quorum number of disks. in-fact
that value is only meant for read quorum good entries
from all online disks.

This PR fixes this behavior properly.
2020-12-23 09:26:40 -08:00
Harshavardhana
5982965839 fix: re-use bytes.Buffer using sync.Pool (#11156) 2020-12-22 23:22:37 -08:00
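A generic sketch of the pooling pattern named in this commit (not the actual MinIO call sites): Get a buffer, Reset before use, Put it back when done:
```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// render borrows a buffer from the pool and returns it when done,
// avoiding a fresh allocation per call.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset() // clear any leftover contents from a previous user
	fmt.Fprintf(buf, "hello %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("world"))
}
```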
Minio Trusted
bfb92a27b7 Update yaml files to latest version RELEASE.2020-12-23T02-24-12Z 2020-12-23 02:43:25 +00:00
Harshavardhana
8565cefe4e fix: allow HTTP2.0 to be always configured 2020-12-22 16:32:58 -08:00
Andreas Auernhammer
8cdf2106b0 refactor cmd/crypto code for SSE handling and parsing (#11045)
This commit refactors the code in `cmd/crypto`
and separates SSE-S3, SSE-C and SSE-KMS.

This commit should not cause any behavior change
except for:
  - `IsRequested(http.Header)`

which now returns the requested type {SSE-C, SSE-S3,
SSE-KMS} and does not consider SSE-C copy headers.

However, SSE-C copy headers alone are anyway not valid.
2020-12-22 09:19:32 -08:00
Harshavardhana
35fafb837b fix: issues with handling delete markers in metacache (#11150)
Additional cases handled

- fix/address situations where healing is not
  triggered on failed writes and deletes.

- consider object exists during listing when
  metadata can be successfully decoded.
2020-12-22 09:16:43 -08:00
Harshavardhana
274bbad5cb fix: select always online peers for remote listing (#11153)
always find the right set of online peers for remote listing;
this may have an effect on listing if a server is down. We
should do this to avoid always performing transient operations
on a bucket->peerClient that is permanently down or down for a
long period.
2020-12-22 09:16:07 -08:00
Harshavardhana
5c451d1690 update x/net/http2 to address few bugs (#11144)
additionally also configure http2 healthcheck
values to quickly detect unstable connections
and let them time out.

also use single transport for proxying requests
2020-12-21 21:42:38 -08:00
Poorna Krishnamoorthy
c987313431 Encrypt remote target if kms is configured (#11034)
Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
2020-12-21 16:21:33 -08:00
Anis Elleuch
2ecaab55a6 admin: ServerInfo returns info without object layer initialized (#11142) 2020-12-21 09:35:19 -08:00
Harshavardhana
3e792ae2a2 fix: change defaults for DNS cache dialer (#11145) 2020-12-21 09:33:29 -08:00
Yingrong Zhao
6df6ac0f34 fix testMultipartUploadFailure to properly cleanup (#11137) 2020-12-21 08:23:27 -08:00
Harshavardhana
4cc500a041 normalize users with double // in accessKeys (#11143)
Bonus fix: use a constant-time compare for secret keys in web-handlers.go:SetAuth()
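The bonus fix refers to the standard constant-time comparison idiom for secrets; a minimal sketch (illustrative, not the actual web-handlers code):
```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// secretsEqual compares two secrets without leaking timing
// information about how many leading bytes match.
func secretsEqual(a, b string) bool {
	return subtle.ConstantTimeCompare([]byte(a), []byte(b)) == 1
}

func main() {
	fmt.Println(secretsEqual("s3cret", "s3cret")) // true
	fmt.Println(secretsEqual("s3cret", "nope"))   // false
}
```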
2020-12-20 10:09:51 -08:00
Harshavardhana
d8e28830cf fix: allow STS creds for admin accounts to add users (#11138)
Allow rotating creds with privileges to add users

fixes https://github.com/minio/console/issues/529
2020-12-19 13:24:21 -08:00
Harshavardhana
3e16ec457a fix: support user/groups with '/' character (#11127)
NOTE: user/groups with `//` shall be normalized to `/`

fixes #11126
2020-12-19 09:36:37 -08:00
Harshavardhana
e5d378931d fix: delimiter based listing was broken without marker (#11136)
with a missing NextMarker in delimiter-based listing,
top-level prefixes beyond the 4500 or max-keys value
wouldn't be sent back for the client to ask for the next
batch.

reproduced at a customer deployment, create prefixes
as shown below

```
for year in $(seq 2017 2020)
do
    for month in {01..12}
    do for day in {01..31}
       do
           mc -q cp file myminio/testbucket/dir/day_id=$year-$month-$day/;
       done
    done
done
```

Then perform

```
aws s3api --profile minio --endpoint-url http://localhost:9000 list-objects \
    --bucket testbucket --prefix dir/ --delimiter / --max-keys 1000
```

You shall see a missing NextMarker; this disallows listing beyond the requested
max-keys and also disallows more than 4500 (maxKeyObjectList) prefixes from being listed,
because the client wouldn't know the NextMarker available.

This PR addresses this situation properly by making the implementation
more spec-compatible, i.e. NextMarker can in fact be either an object or a prefix
with delimiter, depending on the input operation.

This issue was introduced after the list caching changes and has been present
for a while.
2020-12-19 09:36:04 -08:00
Harshavardhana
6128304f6e fix: regression introduced in aws-sdk-go tests
the regression was introduced by cce5d7152a,
which incorrectly implemented the tests and fixed a wrong requirement;
CompleteMultipart should never succeed in the tests.
2020-12-18 21:14:27 -08:00
Anis Elleuch
e63a10e505 Profiling does not required object layer to be initialized (#11133) 2020-12-18 11:51:15 -08:00
Anis Elleuch
5434088c51 replication: Ensure to always use nano precision source modtime (#11135) 2020-12-18 11:37:28 -08:00
Harshavardhana
a773cf48d8 fix: overlapping object and prefix rejected (#11130)
fixes #11129
2020-12-18 08:51:09 -08:00
Minio Trusted
386dd56856 Update yaml files to latest version RELEASE.2020-12-18T03-27-42Z 2020-12-18 03:45:13 +00:00
Harshavardhana
f714840da7 add _MINIO_SERVER_DEBUG env for enabling debug messages (#11128) 2020-12-17 16:52:47 -08:00
Harshavardhana
7c9ef76f66 fix: timer deadlock on expired timers (#11124)
the issue was introduced in #11106 by the following
pattern

<-t.C // timer fired

if !t.Stop() {
   <-t.C // timer hangs
}

It seems to hang at the last `<-t.C` line; this
issue happens because a fired timer cannot be
Stopped() anymore and t.Stop() returns `false`,
leading to a confusing state of usage.

Refactor the code to use timers appropriately,
with the exact requirements in place.
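A sketch of safe reuse under stdlib timer semantics (an illustrative loop): once `<-t.C` has consumed the fired value, the timer can be Reset directly; the Stop-and-drain dance is only needed when the timer has not been received from:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.NewTimer(10 * time.Millisecond)
	defer t.Stop()

	for i := 0; i < 3; i++ {
		<-t.C // timer fired; its value is now consumed
		fmt.Println("tick", i)
		// Safe: the channel is already drained, so Reset directly.
		// Calling t.Stop() here would return false, and another
		// <-t.C would hang - exactly the deadlock described above.
		t.Reset(10 * time.Millisecond)
	}
}
```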
2020-12-17 12:35:02 -08:00
Anis Elleuch
cffdb01279 azure/s3 gateways: Pass ETag during GET call to avoid data corruption (#11024)
Both Azure & S3 gateways call for object information before returning
the stream of the object, however, the object content/length could be
modified meanwhile, which means it can return a corrupted object.

Use ETag to ensure that the object was not modified during the GET call
2020-12-17 09:11:14 -08:00
Harshavardhana
970ddb424b feat: treat /var/run/secrets/ on k8s as system cert directory (#11123)
consider `/var/run/secrets/kubernetes.io/serviceaccount`
as system cert directory for container platform.
2020-12-16 18:24:12 -08:00
Harshavardhana
b390a2a0b9 fix: reuse timers in erasure set hotpaths (#11106)
reuse timers in

 - connectDisks() monitoring
 - healMRFRoutine() channel timeouts
2020-12-16 14:33:05 -08:00
Yingrong Zhao
cce5d7152a fix testListMultipartUpload in aws-sdk-go (#11122)
Currently, the test neither completes nor aborts the multipart upload
after testing the list operation. This caused `cleanup()` to fail, since
the object can't be deleted, which causes the bucket deletion to
fail.

This PR adds a `CompleteMultipartUpload` call to commit the object so
the cleanup can successfully delete the object and bucket.
2020-12-16 13:24:10 -08:00
Harshavardhana
90158f1e33 fix: avoid logging for Heal APIs in FS mode (#11121)
fixes #11120
2020-12-16 09:46:13 -08:00
Minio Trusted
26624552be Update yaml files to latest version RELEASE.2020-12-16T05-05-17Z 2020-12-16 05:23:19 +00:00
Harshavardhana
c606c76323 fix: prioritized latest buckets for crawler to finish the scans faster (#11115)
the crawler should only ListBuckets once, not once per serverPool;
buckets are the same across all pools and sets, and ListBuckets
always returns a unified view. Once ListBuckets returns,
sort the result by create time to scan the latest buckets earlier,
with the assumption that the latest buckets have less
content than older buckets, allowing them to be scanned faster
and also providing a view closer to the latest.
2020-12-15 17:34:54 -08:00
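A sketch of the ordering described above, with a simplified `BucketInfo` standing in for MinIO's own type:

```go
package example

import (
	"sort"
	"time"
)

// BucketInfo is a simplified stand-in for the server's bucket metadata.
type BucketInfo struct {
	Name    string
	Created time.Time
}

// scanOrder sorts buckets newest-first, so the crawler reaches recently
// created (typically smaller) buckets before older, larger ones.
func scanOrder(buckets []BucketInfo) []BucketInfo {
	sort.Slice(buckets, func(i, j int) bool {
		return buckets[i].Created.After(buckets[j].Created)
	})
	return buckets
}
```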
Harshavardhana
d674263eb7 fix: remove inode free as part of SameDisk check (#11107) 2020-12-15 11:39:38 -08:00
Klaus Post
e7d3b49a20 metacache: Make very small requests transient (#11109) 2020-12-15 11:25:36 -08:00
Harshavardhana
5df61ab96b fix: remove gorilla/rpc/ deps fully after our fork (#11108) 2020-12-15 11:18:06 -08:00
Poorna Krishnamoorthy
3456b03b12 Ignore ObjectNotFound errors in delete api while enforcing locking (#11114)
AWS does not report object-not-found or version-not-found as errors in the response.
2020-12-15 11:15:49 -08:00
Klaus Post
f6fb27e8f0 Don't copy interesting ids, clean up logging (#11102)
When searching the caches, don't copy the ids; instead, inline the loop.

```
Benchmark_bucketMetacache_findCache-32    	   19200	     63490 ns/op	    8303 B/op	       5 allocs/op
Benchmark_bucketMetacache_findCache-32    	   20338	     58609 ns/op	     111 B/op	       4 allocs/op
```

Add a reasonable, but still simplistic, benchmark.

Bonus: nicer zero-alloc logging.
2020-12-14 13:13:33 -08:00
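A simplified before/after sketch of the change (the real `bucketMetacache.findCache` does more); iterating the map directly removes the per-search id copy that dominated the allocation count in the benchmark above:

```go
package example

type metacache struct{ id, bucket string }

// Before: snapshot all ids, then look each cache up again - one large
// allocation on every search.
func findCacheCopying(caches map[string]metacache, want func(metacache) bool) (metacache, bool) {
	ids := make([]string, 0, len(caches))
	for id := range caches {
		ids = append(ids, id)
	}
	for _, id := range ids {
		if c := caches[id]; want(c) {
			return c, true
		}
	}
	return metacache{}, false
}

// After: inline the loop over the map - no intermediate slice at all.
func findCacheInline(caches map[string]metacache, want func(metacache) bool) (metacache, bool) {
	for _, c := range caches {
		if want(c) {
			return c, true
		}
	}
	return metacache{}, false
}
```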
Harshavardhana
8368ab76aa fix: remove the requirement for healing buckets in ListBucketsHeal (#11098)
With the new refactor of bucket healing, healing a bucket happens
automatically, including its metadata; there is no need to
redundantly heal buckets in ListBucketsHeal as well, so remove
it.
2020-12-14 12:07:07 -08:00
Harshavardhana
3e83643320 lifecycle improvements and additional debug logging (#11096)
Bonus change: fix browser assets.
2020-12-13 12:05:54 -08:00
Harshavardhana
2eb52ca5f4 fix: heal bucket metadata right before healing bucket (#11097)
Optimization, mainly to avoid listing the entire
`.minio.sys/buckets/.minio.sys` directory; this
can get really huge and gets in the way of startup
routines. Contents inside `.minio.sys/buckets/.minio.sys`
are rather transient and do not need to be healed.
2020-12-13 11:57:08 -08:00
Harshavardhana
705e196b6c upgrade event notification dependencies (#11095) 2020-12-12 20:31:28 -08:00
Andreas Auernhammer
7b5223d83d add vulnerability report policy (#11084) 2020-12-12 17:38:37 -08:00
Anis Elleuch
f164085227 xl: Always set root disk to true in test environment (#11094)
Test environments (go test or manual testing) should always consider
the passed disks to be root disks and should not rely on the disk.IsRootDisk()
function. The reason is that the latter can return a false negative
when called on a busy system. However, a false negative will
only occur in a testing environment and not in production, so we can
accept this trade-off for now.
2020-12-12 16:10:07 -08:00
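A sketch of the trade-off, with a hypothetical `probe` standing in for `disk.IsRootDisk()`:

```go
package example

// isRootDisk short-circuits the root-disk probe in test environments,
// where the real check can return a false negative on a busy system.
func isRootDisk(path string, testEnv bool, probe func(string) (bool, error)) (bool, error) {
	if testEnv {
		// go test / manual runs: always treat the disk as a root disk.
		return true, nil
	}
	return probe(path)
}
```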
Minio Trusted
31bf6f0c25 Update yaml files to latest version RELEASE.2020-12-12T08-39-07Z 2020-12-12 08:56:30 +00:00
529 changed files with 59773 additions and 14152 deletions

View File

@@ -1,2 +1,9 @@
.git
.github
docs
default.etcd
browser
*.gz
*.tar.gz
*.bzip2
*.zip

View File

@@ -10,9 +10,10 @@
## Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Optimization (provides speedup with no functional changes)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
## Checklist:
- [ ] Fixes a regression (If yes, please add `commit-id` or `PR #` here)
- [ ] Documentation needed
- [ ] Unit tests needed
- [ ] Documentation updated
- [ ] Unit tests added/updated

View File

@@ -11,8 +11,8 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go-version: [1.14.x, 1.15.x]
os: [ubuntu-latest, windows-latest]
go-version: [1.16.x]
os: [ubuntu-latest, windows-latest, macos-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v1
@@ -21,12 +21,19 @@ jobs:
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Build on ${{ matrix.os }}
if: matrix.os == 'macos-latest'
env:
CGO_ENABLED: 0
GO111MODULE: on
run: |
make
make test-race
- name: Build on ${{ matrix.os }}
if: matrix.os == 'windows-latest'
env:
CGO_ENABLED: 0
GO111MODULE: on
MINIO_CI_CD: 1
run: |
go build --ldflags="-s -w" -o %GOPATH%\bin\minio.exe
go test -v --timeout 50m ./...
@@ -35,17 +42,14 @@ jobs:
env:
CGO_ENABLED: 0
GO111MODULE: on
MINIO_CI_CD: 1
run: |
sudo sysctl net.ipv6.conf.all.disable_ipv6=0
sudo sysctl net.ipv6.conf.default.disable_ipv6=0
sudo apt-get install devscripts shellcheck
nancy_version=$(curl --retry 10 -Ls -o /dev/null -w "%{url_effective}" https://github.com/sonatype-nexus-community/nancy/releases/latest | sed "s/https:\/\/github.com\/sonatype-nexus-community\/nancy\/releases\/tag\///")
curl -L -o nancy https://github.com/sonatype-nexus-community/nancy/releases/download/${nancy_version}/nancy-linux.amd64-${nancy_version} && chmod +x nancy
curl -L -o nancy https://github.com/sonatype-nexus-community/nancy/releases/download/${nancy_version}/nancy-${nancy_version}-linux-amd64 && chmod +x nancy
go list -m all | ./nancy sleuth
make
diff -au <(gofmt -s -d cmd) <(printf "")
diff -au <(gofmt -s -d pkg) <(printf "")
make test-race
make crosscompile
make verify

View File

@@ -17,6 +17,8 @@ linters:
- gosimple
- deadcode
- structcheck
- gomodguard
- gofmt
issues:
exclude-use-default: false

2239
CREDITS

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
FROM golang:1.15-alpine as builder
FROM golang:1.16-alpine as builder
LABEL maintainer="MinIO Inc <dev@min.io>"
@@ -15,6 +15,8 @@ FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_ROOT_USER_FILE=access_key \
MINIO_ROOT_PASSWORD_FILE=secret_key \
MINIO_KMS_MASTER_KEY_FILE=kms_master_key \
MINIO_SSE_MASTER_KEY_FILE=sse_master_key \
MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav"

View File

@@ -1,4 +1,4 @@
FROM golang:1.15-alpine as builder
FROM golang:1.16-alpine as builder
LABEL maintainer="MinIO Inc <dev@min.io>"
@@ -17,6 +17,8 @@ ARG TARGETARCH
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_ROOT_USER_FILE=access_key \
MINIO_ROOT_PASSWORD_FILE=secret_key \
MINIO_KMS_MASTER_KEY_FILE=kms_master_key \
MINIO_SSE_MASTER_KEY_FILE=sse_master_key \
MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav"

View File

@@ -6,19 +6,18 @@ LABEL maintainer="MinIO Inc <dev@min.io>"
COPY dockerscripts/docker-entrypoint.sh /usr/bin/
COPY minio /usr/bin/
COPY CREDITS /licenses/CREDITS
COPY LICENSE /licenses/LICENSE
ENV MINIO_UPDATE=off \
MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_ROOT_USER_FILE=access_key \
MINIO_ROOT_PASSWORD_FILE=secret_key \
MINIO_KMS_MASTER_KEY_FILE=kms_master_key \
MINIO_SSE_MASTER_KEY_FILE=sse_master_key
RUN \
microdnf update --nodocs && \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
microdnf clean all && \
RUN microdnf update --nodocs
RUN microdnf install curl ca-certificates shadow-utils util-linux --nodocs
RUN microdnf clean all && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh

View File

@@ -2,19 +2,23 @@ FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG TARGETARCH
ARG RELEASE
LABEL name="MinIO" \
vendor="MinIO Inc <dev@min.io>" \
maintainer="MinIO Inc <dev@min.io>" \
version="RELEASE.2020-11-25T22-36-25Z" \
release="RELEASE.2020-11-25T22-36-25Z" \
version="${RELEASE}" \
release="${RELEASE}" \
summary="MinIO is a High Performance Object Storage, API compatible with Amazon S3 cloud storage service." \
description="MinIO object storage is fundamentally different. Designed for performance and the S3 API, it is 100% open-source. MinIO is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads."
ENV MINIO_ACCESS_KEY_FILE=access_key \
MINIO_SECRET_KEY_FILE=secret_key \
MINIO_ROOT_USER_FILE=access_key \
MINIO_ROOT_PASSWORD_FILE=secret_key \
MINIO_KMS_MASTER_KEY_FILE=kms_master_key \
MINIO_SSE_MASTER_KEY_FILE=sse_master_key \
MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav"
MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav"
COPY dockerscripts/verify-minio.sh /usr/bin/verify-minio.sh
COPY dockerscripts/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
@@ -26,9 +30,9 @@ RUN \
microdnf install curl ca-certificates shadow-utils util-linux --nodocs && \
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
microdnf install minisign --nodocs && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/minio -o /usr/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/minio.sha256sum -o /usr/bin/minio.sha256sum && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/minio.minisig -o /usr/bin/minio.minisig && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE} -o /usr/bin/minio && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.sha256sum -o /usr/bin/minio.sha256sum && \
curl -s -q https://dl.min.io/server/minio/release/linux-${TARGETARCH}/archive/minio.${RELEASE}.minisig -o /usr/bin/minio.minisig && \
microdnf clean all && \
chmod +x /usr/bin/minio && \
chmod +x /usr/bin/docker-entrypoint.sh && \

View File

@@ -17,38 +17,28 @@ checks:
getdeps:
@mkdir -p ${GOPATH}/bin
@which golangci-lint 1>/dev/null || (echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v1.27.0)
@which ruleguard 1>/dev/null || (echo "Installing ruleguard" && GO111MODULE=off go get github.com/quasilyte/go-ruleguard/...)
@which msgp 1>/dev/null || (echo "Installing msgp" && GO111MODULE=off go get github.com/tinylib/msgp)
@which stringer 1>/dev/null || (echo "Installing stringer" && GO111MODULE=off go get golang.org/x/tools/cmd/stringer)
@which msgp 1>/dev/null || (echo "Installing msgp" && go get github.com/tinylib/msgp@v1.1.3)
@which stringer 1>/dev/null || (echo "Installing stringer" && go get golang.org/x/tools/cmd/stringer)
crosscompile:
@(env bash $(PWD)/buildscripts/cross-compile.sh)
verifiers: getdeps fmt lint ruleguard check-gen
verifiers: getdeps lint check-gen
check-gen:
@go generate ./... >/dev/null
@(! git diff --name-only | grep '_gen.go$$') || (echo "Non-committed changes in auto-generated code is detected, please commit them to proceed." && false)
fmt:
@echo "Running $@ check"
@GO111MODULE=on gofmt -d cmd/
@GO111MODULE=on gofmt -d pkg/
lint:
@echo "Running $@ check"
@GO111MODULE=on ${GOPATH}/bin/golangci-lint cache clean
@GO111MODULE=on ${GOPATH}/bin/golangci-lint run --timeout=5m --config ./.golangci.yml
ruleguard:
@echo "Running $@ check"
@${GOPATH}/bin/ruleguard -rules ruleguard.rules.go github.com/minio/minio/...
@GO111MODULE=on ${GOPATH}/bin/golangci-lint run --build-tags kqueue --timeout=10m --config ./.golangci.yml
# Builds minio, runs the verifiers then runs the tests.
check: test
test: verifiers build
@echo "Running unit tests"
@GO111MODULE=on CGO_ENABLED=0 go test -tags kqueue ./... 1>/dev/null
@GOGC=25 GO111MODULE=on CGO_ENABLED=0 go test -tags kqueue ./... 1>/dev/null
test-race: verifiers build
@echo "Running unit tests under -race"
@@ -71,13 +61,18 @@ build: checks
@echo "Building minio binary to './minio'"
@GO111MODULE=on CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
hotfix: LDFLAGS := $(shell MINIO_RELEASE="RELEASE" MINIO_HOTFIX="hotfix" go run buildscripts/gen-ldflags.go $(shell git describe --tags --abbrev=0 | \
sed 's#RELEASE\.\([0-9]\+\)-\([0-9]\+\)-\([0-9]\+\)T\([0-9]\+\)-\([0-9]\+\)-\([0-9]\+\)Z#\1-\2-\3T\4:\5:\6Z#'))
hotfix: install
hotfix-vars:
$(eval LDFLAGS := $(shell MINIO_RELEASE="RELEASE" MINIO_HOTFIX="hotfix.$(shell git rev-parse --short HEAD)" go run buildscripts/gen-ldflags.go $(shell git describe --tags --abbrev=0 | \
sed 's#RELEASE\.\([0-9]\+\)-\([0-9]\+\)-\([0-9]\+\)T\([0-9]\+\)-\([0-9]\+\)-\([0-9]\+\)Z#\1-\2-\3T\4:\5:\6Z#')))
$(eval TAG := "minio/minio:$(shell git describe --tags --abbrev=0).hotfix.$(shell git rev-parse --short HEAD)")
hotfix: hotfix-vars install
docker: checks
docker-hotfix: hotfix checks
@echo "Building minio docker image '$(TAG)'"
@docker build -t $(TAG) . -f Dockerfile.dev
docker: build checks
@echo "Building minio docker image '$(TAG)'"
@GOOS=linux GO111MODULE=on CGO_ENABLED=0 go build -tags kqueue -trimpath --ldflags "$(LDFLAGS)" -o $(PWD)/minio 1>/dev/null
@docker build -t $(TAG) . -f Dockerfile.dev
# Builds minio and installs it to $GOPATH/bin.

222
README.md
View File

@@ -5,80 +5,175 @@
MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
## Docker Container
### Stable
```
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio server /data
This README provides quickstart instructions on running MinIO on baremetal hardware, including Docker-based installations. For Kubernetes environments,
use the [MinIO Kubernetes Operator](https://github.com/minio/operator/blob/master/README.md).
# Docker Installation
Use the following commands to run a standalone MinIO server on a Docker container.
Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Quickstart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide.html)
for more complete documentation.
## Stable
Run the following command to run the latest stable image of MinIO on a Docker container using an ephemeral data volume:
```sh
docker run -p 9000:9000 minio/minio server /data
```
### Edge
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
> NOTE: To deploy MinIO on Docker with persistent storage, you must map local persistent directories from the host OS to the container using the
`docker -v` option. For example, `-v /mnt/data:/data` maps the host OS drive at `/mnt/data` to `/data` on the Docker container.
## Edge
Run the following command to run the bleeding-edge image of MinIO on a Docker container using an ephemeral data volume:
```
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio:edge server /data
docker run -p 9000:9000 minio/minio:edge server /data
```
> NOTE: Docker will not display the default keys unless you start the container with the `-it` (interactive TTY) argument. Generally, it is not recommended to use default keys with containers. Please visit the MinIO Docker quickstart guide for more information [here](https://docs.min.io/docs/minio-docker-quickstart-guide)
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
> NOTE: To deploy MinIO on Docker with persistent storage, you must map local persistent directories from the host OS to the container using the
`docker -v` option. For example, `-v /mnt/data:/data` maps the host OS drive at `/mnt/data` to `/data` on the Docker container.
# macOS
Use the following commands to run a standalone MinIO server on macOS.
Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Quickstart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide.html)
for more complete documentation.
## Homebrew (recommended)
Run the following command to install the latest stable MinIO package using [Homebrew](https://brew.sh/). Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
## macOS
### Homebrew (recommended)
Install minio packages using [Homebrew](https://brew.sh/)
```sh
brew install minio/stable/minio
minio server /data
```
> NOTE: If you previously installed minio using `brew install minio`, then it is recommended that you reinstall minio from the official `minio/stable/minio` repo instead.
```sh
brew uninstall minio
brew install minio/stable/minio
```
### Binary Download
| Platform | Architecture | URL |
| ---------- | -------- | ------ |
| Apple macOS | 64-bit Intel | https://dl.min.io/server/minio/release/darwin-amd64/minio |
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
## Binary Download
Use the following command to download and run a standalone MinIO server on macOS. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
```sh
chmod 755 minio
wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data
```
## GNU/Linux
### Binary Download
| Platform | Architecture | URL |
| ---------- | -------- | ------ |
| GNU/Linux | 64-bit Intel | https://dl.min.io/server/minio/release/linux-amd64/minio |
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
# GNU/Linux
Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
```sh
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data
```
| Platform | Architecture | URL |
| ---------- | -------- | ------ |
| GNU/Linux | ppc64le | https://dl.min.io/server/minio/release/linux-ppc64le/minio |
Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.
| Architecture | URL |
| -------- | ------ |
| 64-bit Intel/AMD | https://dl.min.io/server/minio/release/linux-amd64/minio |
| 64-bit ARM | https://dl.min.io/server/minio/release/linux-arm64/minio |
| 64-bit PowerPC LE (ppc64le) | https://dl.min.io/server/minio/release/linux-ppc64le/minio |
| IBM Z-Series (S390X) | https://dl.min.io/server/minio/release/linux-s390x/minio |
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Quickstart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide.html)
for more complete documentation.
# Microsoft Windows
To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:
```sh
wget https://dl.min.io/server/minio/release/linux-ppc64le/minio
chmod +x minio
./minio server /data
https://dl.min.io/server/minio/release/windows-amd64/minio.exe
```
## Microsoft Windows
### Binary Download
| Platform | Architecture | URL |
| ---------- | -------- | ------ |
| Microsoft Windows | 64-bit | https://dl.min.io/server/minio/release/windows-amd64/minio.exe |
Use the following command to run a standalone MinIO server on the Windows host. Replace ``D:\`` with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or powershell directory to the location of the ``minio.exe`` executable, *or* add the path to that directory to the system ``$PATH``:
```sh
minio.exe server D:\Photos
minio.exe server D:\
```
## FreeBSD
### Port
Install minio packages using [pkg](https://github.com/freebsd/pkg), MinIO doesn't officially build FreeBSD binaries but is maintained by FreeBSD upstream [here](https://www.freshports.org/www/minio).
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Quickstart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide.html)
for more complete documentation.
# FreeBSD
MinIO does not provide an official FreeBSD binary. However, FreeBSD maintains an [upstream release](https://www.freshports.org/www/minio) using [pkg](https://github.com/freebsd/pkg):
```sh
pkg install minio
@@ -87,13 +182,32 @@ sysrc minio_disks=/home/user/Photos
service minio start
```
## Install from Source
Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.15](https://golang.org/dl/#stable)
# Install from Source
Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.16](https://golang.org/dl/#stable)
```sh
GO111MODULE=on go get github.com/minio/minio
```
The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Browser, an embedded
web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the
root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
see https://docs.min.io/docs/ and click **MINIO SDKS** in the navigation to view MinIO SDKs for supported languages.
> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Quickstart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide.html)
for more complete documentation.
MinIO strongly recommends *against* using compiled-from-source MinIO servers for production environments.
# Deployment Recommendations
## Allow port access for Firewalls
By default MinIO uses the port 9000 to listen for incoming connections. If your platform blocks the port by default, you may need to enable access to the port.
@@ -149,6 +263,13 @@ iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart
```
## Pre-existing data
When deployed on a single drive, MinIO server lets clients access any pre-existing data in the data directory. For example, if MinIO is started with the command `minio server /mnt/data`, any pre-existing data in the `/mnt/data` directory would be accessible to the clients.
The above statement is also valid for all gateway backends.
# Test MinIO Connectivity
## Test using MinIO Browser
MinIO Server comes with an embedded web based object browser. Point your web browser to http://127.0.0.1:9000 to ensure your server has started successfully.
@@ -157,12 +278,7 @@ MinIO Server comes with an embedded web based object browser. Point your web bro
## Test using MinIO Client `mc`
`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client [Quickstart Guide](https://docs.min.io/docs/minio-client-quickstart-guide) for further instructions.
## Pre-existing data
When deployed on a single drive, MinIO server lets clients access any pre-existing data in the data directory. For example, if MinIO is started with the command `minio server /mnt/data`, any pre-existing data in the `/mnt/data` directory would be accessible to the clients.
The above statement is also valid for all gateway backends.
## Upgrading MinIO
# Upgrading MinIO
MinIO server supports rolling upgrades, i.e. you can update one MinIO instance at a time in a distributed cluster. This allows upgrades with no downtime. Upgrades can be done manually by replacing the binary with the latest release and restarting all servers in a rolling fashion. However, we recommend all our users to use [`mc admin update`](https://docs.min.io/docs/minio-admin-complete-guide.html#update) from the client. This will update all the nodes in the cluster simultaneously and restart them, as shown in the following command from the MinIO client (mc):
```
@@ -171,7 +287,7 @@ mc admin update <minio alias, e.g., myminio>
> NOTE: Some releases might not allow rolling upgrades; this is always called out in the release notes, so it is generally advised to read the release notes before upgrading. In such a situation, `mc admin update` is the recommended mechanism to upgrade all servers at once.
### Important things to remember during MinIO upgrades
## Important things to remember during MinIO upgrades
- `mc admin update` will only work if the user running MinIO has write access to the parent directory where the binary is located, for example if the current binary is at `/usr/local/bin/minio`, you would need write access to `/usr/local/bin`.
- `mc admin update` updates and restarts all servers simultaneously, applications would retry and continue their respective operations upon upgrade.
@@ -181,7 +297,7 @@ mc admin update <minio alias, e.g., myminio>
- If using Vault as KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
- If using etcd with MinIO for the federation, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md
## Explore Further
# Explore Further
- [MinIO Erasure Code QuickStart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide)
- [Use `mc` with MinIO Server](https://docs.min.io/docs/minio-client-quickstart-guide)
- [Use `aws-cli` with MinIO Server](https://docs.min.io/docs/aws-cli-with-minio)
@@ -189,8 +305,8 @@ mc admin update <minio alias, e.g., myminio>
- [Use `minio-go` SDK with MinIO Server](https://docs.min.io/docs/golang-client-quickstart-guide)
- [The MinIO documentation website](https://docs.min.io)
## Contribute to MinIO Project
# Contribute to MinIO Project
Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
## License
# License
Use of MinIO is governed by the Apache 2.0 License found at [LICENSE](https://github.com/minio/minio/blob/master/LICENSE).

View File

@@ -8,16 +8,16 @@ MinIO is a very lightweight service that can easily be combined with other applications
### Stable
```
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
-e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio server /data
```
### Edge
```
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
-e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio:edge server /data
```
@@ -78,7 +78,7 @@ minio.exe server D:\Photos
## FreeBSD
### Port
Install using [pkg](https://github.com/freebsd/pkg),, MinIO does not provide official FreeBSD binaries; they are maintained by FreeBSD upstream, see [here](https://www.freshports.org/www/minio).
Install using [pkg](https://github.com/freebsd/pkg). MinIO does not provide official FreeBSD binaries; they are maintained by FreeBSD upstream, see [here](https://www.freshports.org/www/minio).
```sh
pkg install minio
@@ -89,7 +89,7 @@ service minio start
## Install from Source
Source installation is intended only for developers and advanced users. If you do not have a working Golang environment, please refer to [How to install Golang](https://golang.org/doc/install). The minimum required Golang version is [go1.14](https://golang.org/dl/#stable)
Source installation is intended only for developers and advanced users. If you do not have a working Golang environment, please refer to [How to install Golang](https://golang.org/doc/install). The minimum required Golang version is [go1.16](https://golang.org/dl/#stable)
```sh
GO111MODULE=on go get github.com/minio/minio
@@ -179,7 +179,7 @@ mc admin update <minio alias, e.g., myminio>
- For federated deployments, run `mc admin update` separately against each cluster. Do not update `mc` to any new version until all clusters have been updated successfully.
- If `kes` is used as KMS for MinIO, just replace the binary and restart `kes`; more information about `kes` can be found [here](https://github.com/minio/kes/wiki).
- If Vault is used as KMS for MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
- If MindIO is used with etcd, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md
- If MinIO is used with etcd, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md
## Explore Further
- [MinIO Erasure Code Quickstart Guide](https://docs.min.io/docs/minio-erasure-code-quickstart-guide)
@@ -190,7 +190,7 @@ mc admin update <minio alias, e.g., myminio>
- [The MinIO documentation website](https://docs.min.io)
## Contribute to MinIO Project
Please refer to the [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md). Chinese developers are welcome to join the MinIO project.
Please refer to the [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md). Chinese developers are welcome to join the MinIO project.
## License
Use of MinIO is governed by the Apache 2.0 License; see [LICENSE](./LICENSE).

39
VULNERABILITY_REPORT.md Normal file
View File

@@ -0,0 +1,39 @@
## Vulnerability Management Policy
This document formally describes the process of addressing and managing a
reported vulnerability that has been found in the MinIO server code base,
any directly connected ecosystem component or a direct / indirect dependency
of the code base.
### Scope
The vulnerability management policy described in this document covers the
process of investigating, assessing and resolving a vulnerability report
opened by a MinIO employee or an external third party.
Therefore, it lists pre-conditions and actions that should be performed to
resolve and fix a reported vulnerability.
### Vulnerability Management Process
The vulnerability management process requires that the vulnerability report
contains the following information:
- The project / component that contains the reported vulnerability.
- A description of the vulnerability. In particular, the type of the
reported vulnerability and how it might be exploited. Alternatively,
a well-established vulnerability identifier, e.g. CVE number, can be
used instead.
Based on the description mentioned above, a MinIO engineer or security team
member investigates:
- Whether the reported vulnerability exists.
- The conditions that are required such that the vulnerability can be exploited.
- The steps required to fix the vulnerability.
In general, if the vulnerability exists in one of the MinIO code bases
itself - not in a code dependency - then MinIO will, if possible, fix
the vulnerability or implement reasonable countermeasures such that the
vulnerability cannot be exploited anymore.

1
browser/.gitignore vendored
View File

@@ -17,4 +17,3 @@ release
*.syso
coverage.txt
node_modules
production

View File

@@ -17,24 +17,13 @@ nvm install stable
npm install
```
### Install `go-bindata` and `go-bindata-assetfs`
If you do not have a working Golang environment, please follow [Install Golang](https://golang.org/doc/install)
```sh
go get github.com/go-bindata/go-bindata/go-bindata
go get github.com/elazarl/go-bindata-assetfs/go-bindata-assetfs
```
## Generating Assets
### Generate ui-assets.go
```sh
npm run release
```
This generates ui-assets.go in the current directory. Now do `make` in the parent directory to build the minio binary with the newly generated ``ui-assets.go``
This generates `production` in the current directory.
## Run MinIO Browser with live reload

View File

@@ -24,9 +24,6 @@ jest.mock("jwt-decode")
jwtDecode.mockImplementation(() => ({ sub: "minio" }))
jest.mock("../../web", () => ({
GenerateAuth: jest.fn(() => {
return Promise.resolve({ accessKey: "gen1", secretKey: "gen2" })
}),
SetAuth: jest.fn(
({ currentAccessKey, currentSecretKey, newAccessKey, newSecretKey }) => {
if (

View File

@@ -26,6 +26,7 @@ import {
SHARE_OBJECT_EXPIRY_HOURS,
SHARE_OBJECT_EXPIRY_MINUTES
} from "../constants"
import QRCode from "react-qr-code";
export class ShareObjectModal extends React.Component {
constructor(props) {
@@ -89,6 +90,7 @@ export class ShareObjectModal extends React.Component {
<ModalHeader>Share Object</ModalHeader>
<ModalBody>
<div className="input-group copy-text">
<QRCode value={url} size={128}/>
<label>Shareable Link</label>
<input
type="text"

View File

@@ -138,9 +138,10 @@ describe("Uploads actions", () => {
objects: { currentPrefix: "pre1/" }
})
store.dispatch(uploadsActions.uploadFile(file))
const objectPath = encodeURIComponent("pre1/file1")
expect(open).toHaveBeenCalledWith(
"PUT",
"https://localhost:8080/upload/test1/pre1/file1",
"https://localhost:8080/upload/test1/" + objectPath,
true
)
expect(send).toHaveBeenCalledWith(file)

View File

@@ -94,7 +94,7 @@ export const uploadFile = file => {
_filePath = _filePath.substring(1)
}
const filePath = _filePath
const objectName = `${currentPrefix}${filePath}`
const objectName = encodeURIComponent(`${currentPrefix}${filePath}`)
const uploadUrl = `${
window.location.origin
}${minioBrowserPrefix}/upload/${currentBucket}/${objectName}`
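For illustration, the same escaping expressed in Go (the browser code above uses JavaScript's `encodeURIComponent`); percent-escaping keeps characters such as `+` or `#` in the object key from corrupting the upload URL:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	objectName := "pre1/file with spaces+plus"
	// Like encodeURIComponent, PathEscape also escapes '/', so the whole
	// object key travels as a single, unambiguous path segment.
	fmt.Println("/upload/test1/" + url.PathEscape(objectName))
}
```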

View File

@@ -75,6 +75,11 @@
border-color: darken(@input-border, 5%);
}
}
svg {
display: block;
margin: 0 auto 5px;
}
}
/*--------------------------
@@ -150,4 +155,4 @@
100% {
transform: rotate(360deg);
}
}
}

11
browser/assets.go Normal file
View File

@@ -0,0 +1,11 @@
package browser
import "embed"
//go:embed production/*
var fs embed.FS
// GetStaticAssets returns assets
func GetStaticAssets() embed.FS {
return fs
}
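A minimal sketch of how an `embed.FS` like this can be served over HTTP (Go 1.16+); the import path and server wiring here are illustrative, not MinIO's actual code:

```go
package main

import (
	"io/fs"
	"log"
	"net/http"

	"github.com/minio/minio/browser"
)

func main() {
	// Strip the "production/" prefix so assets are served from the root.
	assets, err := fs.Sub(browser.GetStaticAssets(), "production")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":8080", http.FileServer(http.FS(assets))))
}
```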

View File

@@ -14,19 +14,11 @@
* limitations under the License.
*/
var moment = require('moment')
var async = require('async')
var exec = require('child_process').exec
var fs = require('fs')
var isProduction = process.env.NODE_ENV == 'production' ? true : false
var assetsFileName = ''
var commitId = ''
var date = moment.utc()
var version = date.format('YYYY-MM-DDTHH:mm:ss') + 'Z'
var releaseTag = date.format('YYYY-MM-DDTHH-mm-ss') + 'Z'
var buildType = 'DEVELOPMENT'
if (process.env.MINIO_UI_BUILD) buildType = process.env.MINIO_UI_BUILD
rmDir = function(dirPath) {
try { var files = fs.readdirSync(dirPath); }
@@ -53,74 +45,6 @@ async.waterfall([
console.log('Running', cmd)
exec(cmd, cb)
},
function(stdout, stderr, cb) {
if (isProduction) {
fs.renameSync('production/index_bundle.js',
'production/index_bundle-' + releaseTag + '.js')
} else {
fs.renameSync('dev/index_bundle.js',
'dev/index_bundle-' + releaseTag + '.js')
}
var cmd = 'git log --format="%H" -n1'
console.log('Running', cmd)
exec(cmd, cb)
},
function(stdout, stderr, cb) {
if (!stdout) throw new Error('commitId is empty')
commitId = stdout.replace('\n', '')
if (commitId.length !== 40) throw new Error('commitId invalid : ' + commitId)
assetsFileName = 'ui-assets.go';
var cmd = 'go-bindata-assetfs -o bindata_assetfs.go -pkg browser -nocompress=true production/...'
if (!isProduction) {
cmd = 'go-bindata-assetfs -o bindata_assetfs.go -pkg browser -nocompress=true dev/...'
}
console.log('Running', cmd)
exec(cmd, cb)
},
function(stdout, stderr, cb) {
var cmd = 'gofmt -s -w -l bindata_assetfs.go'
console.log('Running', cmd)
exec(cmd, cb)
},
function(stdout, stderr, cb) {
fs.renameSync('bindata_assetfs.go', assetsFileName)
fs.appendFileSync(assetsFileName, '\n')
fs.appendFileSync(assetsFileName, 'var UIReleaseTag = "' + buildType + '.' +
releaseTag + '"\n')
fs.appendFileSync(assetsFileName, 'var UICommitID = "' + commitId + '"\n')
fs.appendFileSync(assetsFileName, 'var UIVersion = "' + version + '"')
fs.appendFileSync(assetsFileName, '\n')
var contents;
if (isProduction) {
contents = fs.readFileSync(assetsFileName, 'utf8')
.replace(/_productionIndexHtml/g, '_productionIndexHTML')
.replace(/productionIndexHtmlBytes/g, 'productionIndexHTMLBytes')
.replace(/productionIndexHtml/g, 'productionIndexHTML')
.replace(/_productionIndex_bundleJs/g, '_productionIndexBundleJs')
.replace(/productionIndex_bundleJsBytes/g, 'productionIndexBundleJsBytes')
.replace(/productionIndex_bundleJs/g, 'productionIndexBundleJs')
.replace(/_productionJqueryUiMinJs/g, '_productionJqueryUIMinJs')
.replace(/productionJqueryUiMinJsBytes/g, 'productionJqueryUIMinJsBytes')
.replace(/productionJqueryUiMinJs/g, 'productionJqueryUIMinJs');
} else {
contents = fs.readFileSync(assetsFileName, 'utf8')
.replace(/_devIndexHtml/g, '_devIndexHTML')
.replace(/devIndexHtmlBytes/g, 'devIndexHTMLBytes')
.replace(/devIndexHtml/g, 'devIndexHTML')
.replace(/_devIndex_bundleJs/g, '_devIndexBundleJs')
.replace(/devIndex_bundleJsBytes/g, 'devIndexBundleJsBytes')
.replace(/devIndex_bundleJs/g, 'devIndexBundleJs')
.replace(/_devJqueryUiMinJs/g, '_devJqueryUIMinJs')
.replace(/devJqueryUiMinJsBytes/g, 'devJqueryUIMinJsBytes')
.replace(/devJqueryUiMinJs/g, 'devJqueryUIMinJs');
}
contents = contents.replace(/MINIO_UI_VERSION/g, version)
contents = contents.replace(/index_bundle.js/g, 'index_bundle-' + releaseTag + '.js')
fs.writeFileSync(assetsFileName, contents, 'utf8')
console.log('UI assets file :', assetsFileName)
cb()
}
], function(err) {
if (err) return console.log(err)
})

32460
browser/package-lock.json generated

File diff suppressed because it is too large

View File

@@ -6,7 +6,7 @@
"test": "jest",
"dev": "NODE_ENV=dev webpack-dev-server --devtool cheap-module-eval-source-map --progress --colors --hot --content-base dev",
"build": "NODE_ENV=dev node build.js",
"release": "NODE_ENV=production MINIO_UI_BUILD=RELEASE node build.js",
"release": "NODE_ENV=production node build.js",
"format": "esformatter -i 'app/**/*.js'"
},
"jest": {
@@ -84,6 +84,7 @@
"react-dropzone": "^11.0.1",
"react-infinite-scroller": "^1.2.4",
"react-onclickout": "^2.0.8",
"react-qr-code": "^1.1.1",
"react-redux": "^5.1.2",
"react-router-dom": "^5.2.0",
"redux": "^4.0.5",

[Binary file not shown: new image, 3.6 KiB]
[Binary file not shown: new image, 15 KiB]
[Binary file not shown: new image, 16 KiB]
[Binary file not shown: new image, 17 KiB]
[Binary file not shown: new image, 4.7 KiB]

View File

@@ -0,0 +1,59 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>MinIO Browser</title>
<link rel="icon" type="image/png" sizes="32x32" href="/minio/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="96x96" href="/minio/favicon-96x96.png">
<link rel="icon" type="image/png" sizes="16x16" href="/minio/favicon-16x16.png">
<link rel="stylesheet" href="/minio/loader.css" type="text/css">
</head>
<body>
<div class="page-load">
<div class="pl-inner">
<img src="/minio/logo.svg" alt="">
</div>
</div>
<div id="root"></div>
<!--[if lt IE 11]>
<div class="ie-warning">
<div class="iw-inner">
<i class="iwi-icon fas fa-exclamation-triangle"></i>
You are using Internet Explorer version 12.0 or lower. Due to security issues and lack of support for Web Standards it is highly recommended that you upgrade to a modern browser
<ul>
<li>
<a href="http://www.google.com/chrome/">
<img src="chrome.png" alt="">
<div>Chrome</div>
</a>
</li>
<li>
<a href="https://www.mozilla.org/en-US/firefox/new/">
<img src="firefox.png" alt="">
<div>Firefox</div>
</a>
</li>
<li>
<a href="https://www.apple.com/safari/">
<img src="safari.png" alt="">
<div>Safari</div>
</a>
</li>
</ul>
<div class="iwi-skip">Skip & Continue</div>
</div>
</div>
<![endif]-->
<script>currentUiVersion = 'MINIO_UI_VERSION'</script>
<script src="/minio/index_bundle.js"></script>
</body>
</html>

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,98 @@
.page-load {
position: fixed;
width: 100%;
height: 100%;
top: 0;
left: 0;
background: #002a37;
z-index: 100;
transition: opacity 200ms;
-webkit-transition: opacity 200ms;
}
.pl-0{
opacity: 0;
}
.pl-1 {
display: none;
}
.pl-inner {
position: absolute;
width: 100px;
height: 100px;
left: 50%;
margin-left: -50px;
top: 50%;
margin-top: -50px;
text-align: center;
-webkit-animation: fade-in 500ms;
animation: fade-in 500ms;
-webkit-animation-fill-mode: both;
animation-fill-mode: both;
animation-delay: 350ms;
-webkit-animation-delay: 350ms;
-webkit-backface-visibility: visible;
backface-visibility: visible;
}
.pl-inner:before {
content: '';
position: absolute;
width: 100%;
height: 100%;
left: 0;
top: 0;
display: block;
-webkit-animation: spin 1000ms infinite linear;
animation: spin 1000ms infinite linear;
border: 1px solid rgba(255, 255, 255, 0.2);
border-left-color: #fff;
border-radius: 50%;
}
.pl-inner > img {
width: 30px;
margin-top: 21px;
}
@-webkit-keyframes fade-in {
0% {
opacity: 0;
}
100% {
opacity: 1;
}
}
@keyframes fade-in {
0% {
opacity: 0;
}
100% {
opacity: 1;
}
}
@-webkit-keyframes spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}
@keyframes spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}

View File

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="93px" height="187px" viewBox="0 0 93 187" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<!-- Generator: Sketch 48.2 (47327) - http://www.bohemiancoding.com/sketch -->
<title>logo</title>
<desc>Created with Sketch.</desc>
<defs></defs>
<g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="logo" transform="translate(0.187500, -0.683594)" fill="#FFFFFF" fill-rule="nonzero">
<path d="M91.49,46.551 C86.7827023,38.7699609 82.062696,30.9966172 77.33,23.231 C74.87,19.231 72.33,15.231 69.88,11.231 C69.57,10.731 69.18,10.291 68.88,9.831 C64.35,2.931 55.44,-1.679 46.73,2.701 C42.9729806,4.51194908 40.0995718,7.75449451 38.7536428,11.7020516 C37.4077139,15.6496086 37.701799,19.9721186 39.57,23.701 C41.08,26.641 43.57,29.121 45.91,31.581 C53.03,39.141 60.38,46.491 67.45,54.111 C72.4175495,59.4492221 74.4526451,66.8835066 72.8965704,74.0075359 C71.3404956,81.1315653 66.390952,87.0402215 59.65,89.821 C59.4938176,89.83842 59.3361824,89.83842 59.18,89.821 L59.18,54.591 C46.6388051,61.0478363 35.3944735,69.759905 26.01,80.291 C11.32,96.671 2.64,117.141 0.01,132.071 L23.96,119.821 C31.96,115.771 39.86,111.821 48.14,107.581 L48.14,175.921 L59.14,187.131 L59.14,101.831 C59.14,101.831 59.39,101.711 60.22,101.261 C63.5480598,99.6738911 66.7772674,97.8873078 69.89,95.911 C77.7130888,90.4306687 82.7479457,81.8029342 83.6709542,72.295947 C84.5939627,62.7889599 81.3127806,53.3538429 74.69,46.471 C66.49,37.891 58.24,29.351 50.05,20.761 C47.67,18.261 47.72,15.101 50.05,12.881 C52.38,10.661 55.56,10.881 57.96,13.331 L61.38,16.781 C64.1,19.681 66.79,22.611 69.53,25.481 C76.4547149,32.7389629 83.3947303,39.9823123 90.35,47.211 C90.7,47.571 91.12,47.871 91.5,48.211 L91.93,47.951 C91.8351945,47.4695902 91.6876376,47.0000911 91.49,46.551 Z M48.11,94.931 C47.9883217,95.5022568 47.6230065,95.9917791 47.11,96.271 C42.72,98.601 38.29,100.871 33.87,103.141 L17.76,111.401 C24.771203,96.7435071 35.1132853,83.9289138 47.96,73.981 C48.08,74.221 48.16,74.301 48.16,74.381 C48.15,81.231 48.17,88.081 48.11,94.931 Z" id="Shape"></path>
</g>
</g>
</svg>

[New file shown above: 2.2 KiB]
[Binary file not shown: new image, 4.9 KiB]

File diff suppressed because one or more lines are too long

View File

@@ -21,7 +21,7 @@ _init() {
## Minimum required versions for build dependencies
GIT_VERSION="1.0"
GO_VERSION="1.13"
GO_VERSION="1.16"
OSX_VERSION="10.8"
KNAME=$(uname -s)
ARCH=$(uname -m)

View File

@@ -9,7 +9,7 @@ function _init() {
export CGO_ENABLED=0
## List of architectures and OS to test cross compilation.
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/arm64 linux/s390x darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64"
SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips"
}
function _build() {

View File

@@ -1,69 +0,0 @@
#!/bin/bash
#
# MinIO Cloud Storage, (C) 2019 MinIO, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
set -E
set -o pipefail
function start_minio_server()
{
MINIO_ACCESS_KEY=minio MINIO_SECRET_KEY=minio123 \
minio --quiet --json server /data --address 127.0.0.1:24242 > server.log 2>&1 &
server_pid=$!
sleep 10
echo "$server_pid"
}
function start_minio_gateway_s3()
{
MINIO_ACCESS_KEY=minio MINIO_SECRET_KEY=minio123 \
minio --quiet --json gateway s3 http://127.0.0.1:24242 \
--address 127.0.0.1:24240 > gateway.log 2>&1 &
gw_pid=$!
sleep 10
echo "$gw_pid"
}
function main()
{
sr_pid="$(start_minio_server)"
gw_pid="$(start_minio_gateway_s3)"
SERVER_ENDPOINT=127.0.0.1:24240 ENABLE_HTTPS=0 ACCESS_KEY=minio \
SECRET_KEY=minio123 MINT_MODE="full" /mint/entrypoint.sh \
aws-sdk-go aws-sdk-java aws-sdk-php aws-sdk-ruby awscli \
healthcheck mc minio-dotnet minio-js \
minio-py s3cmd s3select security
rv=$?
kill "$sr_pid"
kill "$gw_pid"
sleep 3
if [ "$rv" -ne 0 ]; then
echo "=========== Gateway ==========="
cat "gateway.log"
echo "=========== Server ==========="
cat "server.log"
fi
rm -f gateway.log server.log
}
main "$@"

View File

@@ -3,5 +3,5 @@
set -e
for d in $(go list ./... | grep -v browser); do
CGO_ENABLED=1 go test -v -race --timeout 50m "$d"
CGO_ENABLED=1 go test -v -tags kqueue -race --timeout 100m "$d"
done

View File

@@ -33,6 +33,7 @@ export ACCESS_KEY="minio"
export SECRET_KEY="minio123"
export ENABLE_HTTPS=0
export GO111MODULE=on
export GOGC=25
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" )
@@ -63,8 +64,8 @@ function start_minio_erasure_sets()
function start_minio_pool_erasure_sets()
{
export MINIO_ACCESS_KEY=$ACCESS_KEY
export MINIO_SECRET_KEY=$SECRET_KEY
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/pool-disk-sets{1...4} http://127.0.0.1:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address ":9000" > "$WORK_DIR/pool-minio-9000.log" 2>&1 &
"${MINIO[@]}" server --address ":9001" > "$WORK_DIR/pool-minio-9001.log" 2>&1 &
@@ -74,8 +75,8 @@ function start_minio_pool_erasure_sets()
function start_minio_pool_erasure_sets_ipv6()
{
export MINIO_ACCESS_KEY=$ACCESS_KEY
export MINIO_SECRET_KEY=$SECRET_KEY
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://[::1]:9000${WORK_DIR}/pool-disk-sets{1...4} http://[::1]:9001${WORK_DIR}/pool-disk-sets{5...8}"
"${MINIO[@]}" server --address="[::1]:9000" > "$WORK_DIR/pool-minio-ipv6-9000.log" 2>&1 &
"${MINIO[@]}" server --address="[::1]:9001" > "$WORK_DIR/pool-minio-ipv6-9001.log" 2>&1 &
@@ -85,8 +86,8 @@ function start_minio_pool_erasure_sets_ipv6()
function start_minio_dist_erasure()
{
export MINIO_ACCESS_KEY=$ACCESS_KEY
export MINIO_SECRET_KEY=$SECRET_KEY
export MINIO_ROOT_USER=$ACCESS_KEY
export MINIO_ROOT_PASSWORD=$SECRET_KEY
export MINIO_ENDPOINTS="http://127.0.0.1:9000${WORK_DIR}/dist-disk1 http://127.0.0.1:9001${WORK_DIR}/dist-disk2 http://127.0.0.1:9002${WORK_DIR}/dist-disk3 http://127.0.0.1:9003${WORK_DIR}/dist-disk4"
for i in $(seq 0 3); do
"${MINIO[@]}" server --address ":900${i}" > "$WORK_DIR/dist-minio-900${i}.log" 2>&1 &

View File

@@ -28,9 +28,11 @@ WORK_DIR="$PWD/.verify-$RANDOM"
MINIO_CONFIG_DIR="$WORK_DIR/.minio"
MINIO=( "$PWD/minio" --config-dir "$MINIO_CONFIG_DIR" server )
export GOGC=25
function start_minio_3_node() {
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
export MINIO_ERASURE_SET_DRIVE_COUNT=6
start_port=$(shuf -i 10000-65000 -n 1)

View File

@@ -20,7 +20,6 @@ import (
"encoding/xml"
"io"
"net/http"
"net/url"
"github.com/gorilla/mux"
xhttp "github.com/minio/minio/cmd/http"
@@ -61,7 +60,7 @@ type accessControlPolicy struct {
func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketACL")
defer logger.AuditLog(w, r, "PutBucketACL", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -125,7 +124,7 @@ func (api objectAPIHandlers) PutBucketACLHandler(w http.ResponseWriter, r *http.
func (api objectAPIHandlers) GetBucketACLHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketACL")
defer logger.AuditLog(w, r, "GetBucketACL", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -176,11 +175,11 @@ func (api objectAPIHandlers) GetBucketACLHandler(w http.ResponseWriter, r *http.
func (api objectAPIHandlers) PutObjectACLHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutObjectACL")
defer logger.AuditLog(w, r, "PutObjectACL", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
object, err := url.PathUnescape(vars["object"])
object, err := unescapePath(vars["object"])
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
@@ -240,11 +239,11 @@ func (api objectAPIHandlers) PutObjectACLHandler(w http.ResponseWriter, r *http.
func (api objectAPIHandlers) GetObjectACLHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetObjectACL")
defer logger.AuditLog(w, r, "GetObjectACL", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
object, err := url.PathUnescape(vars["object"])
object, err := unescapePath(vars["object"])
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return

View File

@@ -23,9 +23,7 @@ import (
"net/http"
"github.com/gorilla/mux"
"github.com/minio/minio/cmd/config"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/env"
iampolicy "github.com/minio/minio/pkg/iam/policy"
"github.com/minio/minio/pkg/madmin"
)
@@ -43,7 +41,7 @@ const (
func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketQuotaConfig")
defer logger.AuditLog(w, r, "PutBucketQuotaConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.SetBucketQuotaAdminAction)
if objectAPI == nil {
@@ -54,12 +52,6 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
vars := mux.Vars(r)
bucket := vars["bucket"]
// Turn off quota commands if data usage info is unavailable.
if env.Get(envDataUsageCrawlConf, config.EnableOn) == config.EnableOff {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminBucketQuotaDisabled), r.URL)
return
}
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
@@ -89,7 +81,7 @@ func (a adminAPIHandlers) PutBucketQuotaConfigHandler(w http.ResponseWriter, r *
func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketQuotaConfig")
defer logger.AuditLog(w, r, "GetBucketQuotaConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetBucketQuotaAdminAction)
if objectAPI == nil {
@@ -124,7 +116,7 @@ func (a adminAPIHandlers) GetBucketQuotaConfigHandler(w http.ResponseWriter, r *
func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetBucketTarget")
defer logger.AuditLog(w, r, "SetBucketTarget", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
update := r.URL.Query().Get("update") == "true"
@@ -164,7 +156,6 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
sameTarget, _ := isLocalHost(target.URL().Hostname(), target.URL().Port(), globalMinioPort)
if sameTarget && bucket == target.TargetBucket {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBucketRemoteIdenticalToSource), r.URL)
@@ -181,7 +172,12 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
}
if err = globalBucketTargetSys.SetTarget(ctx, bucket, &target, update); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
switch err.(type) {
case BucketRemoteConnectionErr:
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrReplicationRemoteConnectionError, err), r.URL)
default:
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
}
return
}
targets, err := globalBucketTargetSys.ListBucketTargets(ctx, bucket)
@@ -194,7 +190,6 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErrWithErr(ErrAdminConfigBadJSON, err), r.URL)
return
}
if err = globalBucketMetadataSys.Update(bucket, bucketTargetsFile, tgtBytes); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
@@ -214,7 +209,7 @@ func (a adminAPIHandlers) SetRemoteTargetHandler(w http.ResponseWriter, r *http.
func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListBucketTargets")
defer logger.AuditLog(w, r, "ListBucketTargets", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
arnType := vars["type"]
@@ -253,7 +248,7 @@ func (a adminAPIHandlers) ListRemoteTargetsHandler(w http.ResponseWriter, r *htt
func (a adminAPIHandlers) RemoveRemoteTargetHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveBucketTarget")
defer logger.AuditLog(w, r, "RemoveBucketTarget", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
arn := vars["arn"]
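
The `SetRemoteTargetHandler` hunk replaces a blanket `toAPIError` translation with a type switch, so a remote-connection failure surfaces as a replication-specific code instead of a generic one. A sketch of the same pattern; the error type's definition here is invented for illustration, only its name comes from the diff.

```
package main

import (
	"errors"
	"fmt"
)

// BucketRemoteConnectionErr stands in for the typed error the handler
// switches on above; the name matches the diff, the body is a sketch.
type BucketRemoteConnectionErr struct{ Endpoint string }

func (e BucketRemoteConnectionErr) Error() string {
	return fmt.Sprintf("remote target %s is unreachable", e.Endpoint)
}

// toAPICode shows the pattern: map one known error type to a specific
// API error code, fall back to a generic translation for everything else.
func toAPICode(err error) string {
	switch err.(type) {
	case BucketRemoteConnectionErr:
		return "ReplicationRemoteConnectionError"
	default:
		return "InternalError"
	}
}

func main() {
	fmt.Println(toAPICode(BucketRemoteConnectionErr{Endpoint: "minio-2:9000"}))
	fmt.Println(toAPICode(errors.New("disk failure")))
}
```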

View File

@@ -49,7 +49,7 @@ func validateAdminReqConfigKV(ctx context.Context, w http.ResponseWriter, r *htt
}
// Validate request signature.
cred, adminAPIErr := checkAdminRequestAuthType(ctx, r, iampolicy.ConfigUpdateAdminAction, "")
cred, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ConfigUpdateAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return cred, nil
@@ -62,7 +62,7 @@ func validateAdminReqConfigKV(ctx context.Context, w http.ResponseWriter, r *htt
func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteConfigKV")
defer logger.AuditLog(w, r, "DeleteConfigKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -104,7 +104,7 @@ func (a adminAPIHandlers) DelConfigKVHandler(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfigKV")
defer logger.AuditLog(w, r, "SetConfigKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -130,13 +130,14 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
dynamic, err := cfg.ReadConfig(bytes.NewReader(kvBytes))
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
if err = validateConfig(cfg, objectAPI.SetDriveCount()); err != nil {
if err = validateConfig(cfg, objectAPI.SetDriveCounts()); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -158,15 +159,14 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
saveConfig(GlobalContext, objectAPI, backendEncryptedFile, backendEncryptedMigrationComplete)
}
// Apply dynamic values.
if err := applyDynamicConfig(GlobalContext, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
globalNotificationSys.SignalService(serviceReloadDynamic)
// If all values were dynamic, tell the client.
if dynamic {
// Apply dynamic values.
if err := applyDynamicConfig(GlobalContext, objectAPI, cfg); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
globalNotificationSys.SignalService(serviceReloadDynamic)
// If all values were dynamic, tell the client.
w.Header().Set(madmin.ConfigAppliedHeader, madmin.ConfigAppliedTrue)
}
writeSuccessResponseHeadersOnly(w)
@@ -176,7 +176,7 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfigKV")
defer logger.AuditLog(w, r, "GetConfigKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -214,7 +214,7 @@ func (a adminAPIHandlers) GetConfigKVHandler(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ClearConfigHistoryKV")
defer logger.AuditLog(w, r, "ClearConfigHistoryKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
_, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -251,7 +251,7 @@ func (a adminAPIHandlers) ClearConfigHistoryKVHandler(w http.ResponseWriter, r *
func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RestoreConfigHistoryKV")
defer logger.AuditLog(w, r, "RestoreConfigHistoryKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
_, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -282,7 +282,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
return
}
if err = validateConfig(cfg, objectAPI.SetDriveCount()); err != nil {
if err = validateConfig(cfg, objectAPI.SetDriveCounts()); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -299,7 +299,7 @@ func (a adminAPIHandlers) RestoreConfigHistoryKVHandler(w http.ResponseWriter, r
func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListConfigHistoryKV")
defer logger.AuditLog(w, r, "ListConfigHistoryKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -339,7 +339,7 @@ func (a adminAPIHandlers) ListConfigHistoryKVHandler(w http.ResponseWriter, r *h
func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HelpConfigKV")
defer logger.AuditLog(w, r, "HelpHistoryKV", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
_, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -367,7 +367,7 @@ func (a adminAPIHandlers) HelpConfigKVHandler(w http.ResponseWriter, r *http.Req
func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetConfig")
defer logger.AuditLog(w, r, "SetConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
@@ -394,7 +394,7 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
return
}
if err = validateConfig(cfg, objectAPI.SetDriveCount()); err != nil {
if err = validateConfig(cfg, objectAPI.SetDriveCounts()); err != nil {
writeCustomErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigBadJSON), err.Error(), r.URL)
return
}
@@ -424,7 +424,7 @@ func (a adminAPIHandlers) SetConfigHandler(w http.ResponseWriter, r *http.Reques
func (a adminAPIHandlers) GetConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetConfig")
defer logger.AuditLog(w, r, "GetConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
cred, objectAPI := validateAdminReqConfigKV(ctx, w, r)
if objectAPI == nil {
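
The `SetConfigKVHandler` hunks suggest that `cfg.ReadConfig` now reports whether every changed key is dynamic, and that the server applies the new values live and sets `madmin.ConfigAppliedHeader` only in that case. A toy model of that contract, assuming a fixed set of dynamic subsystems; `dynamicKeys` and `readConfig` here are stand-ins, not the real parser.

```
package main

import (
	"fmt"
	"strings"
)

// dynamicKeys is a stand-in for the set of config subsystems that can be
// applied without a restart; the real set lives in cmd/config.
var dynamicKeys = map[string]bool{"api": true, "heal": true, "scanner": true}

// readConfig mimics the new ReadConfig contract sketched in the diff: it
// reports whether *every* changed key was dynamic, so the caller knows
// whether to signal a live reload and set the "config applied" header.
func readConfig(input string) (dynamic bool) {
	dynamic = true
	for _, line := range strings.Split(input, "\n") {
		key := strings.SplitN(line, " ", 2)[0]
		if key != "" && !dynamicKeys[strings.SplitN(key, ":", 2)[0]] {
			dynamic = false
		}
	}
	return dynamic
}

func main() {
	fmt.Println(readConfig("api requests_max=100"))   // true: applied live
	fmt.Println(readConfig("region name=us-east-1")) // false: needs restart
}
```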

View File

@@ -19,11 +19,15 @@ package cmd
import (
"context"
"encoding/json"
"errors"
"io"
"io/ioutil"
"net/http"
"path"
"sort"
"github.com/gorilla/mux"
"github.com/minio/minio/cmd/config/dns"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/auth"
iampolicy "github.com/minio/minio/pkg/iam/policy"
@@ -42,7 +46,7 @@ func validateAdminUsersReq(ctx context.Context, w http.ResponseWriter, r *http.R
}
// Validate request signature.
cred, adminAPIErr = checkAdminRequestAuthType(ctx, r, action, "")
cred, adminAPIErr = checkAdminRequestAuth(ctx, r, action, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return nil, cred
@@ -55,7 +59,7 @@ func validateAdminUsersReq(ctx context.Context, w http.ResponseWriter, r *http.R
func (a adminAPIHandlers) RemoveUser(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveUser")
defer logger.AuditLog(w, r, "RemoveUser", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.DeleteUserAdminAction)
if objectAPI == nil {
@@ -65,7 +69,7 @@ func (a adminAPIHandlers) RemoveUser(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
accessKey := vars["accessKey"]
ok, err := globalIAMSys.IsTempUser(accessKey)
ok, _, err := globalIAMSys.IsTempUser(accessKey)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -93,7 +97,7 @@ func (a adminAPIHandlers) RemoveUser(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListUsers")
defer logger.AuditLog(w, r, "ListUsers", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, cred := validateAdminUsersReq(ctx, w, r, iampolicy.ListUsersAdminAction)
if objectAPI == nil {
@@ -127,7 +131,7 @@ func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetUserInfo")
defer logger.AuditLog(w, r, "GetUserInfo", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
name := vars["accessKey"]
@@ -154,6 +158,7 @@ func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
if !implicitPerm {
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: accessKey,
Groups: cred.Groups,
Action: iampolicy.GetUserAdminAction,
ConditionValues: getConditionValues(r, "", accessKey, claims),
IsOwner: owner,
@@ -183,7 +188,7 @@ func (a adminAPIHandlers) GetUserInfo(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) UpdateGroupMembers(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "UpdateGroupMembers")
defer logger.AuditLog(w, r, "UpdateGroupMembers", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.AddUserToGroupAdminAction)
if objectAPI == nil {
@@ -228,7 +233,7 @@ func (a adminAPIHandlers) UpdateGroupMembers(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) GetGroup(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetGroup")
defer logger.AuditLog(w, r, "GetGroup", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetGroupAdminAction)
if objectAPI == nil {
@@ -257,7 +262,7 @@ func (a adminAPIHandlers) GetGroup(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) ListGroups(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListGroups")
defer logger.AuditLog(w, r, "ListGroups", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListGroupsAdminAction)
if objectAPI == nil {
@@ -283,7 +288,7 @@ func (a adminAPIHandlers) ListGroups(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) SetGroupStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetGroupStatus")
defer logger.AuditLog(w, r, "SetGroupStatus", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.EnableGroupAdminAction)
if objectAPI == nil {
@@ -320,7 +325,7 @@ func (a adminAPIHandlers) SetGroupStatus(w http.ResponseWriter, r *http.Request)
func (a adminAPIHandlers) SetUserStatus(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetUserStatus")
defer logger.AuditLog(w, r, "SetUserStatus", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.EnableUserAdminAction)
if objectAPI == nil {
@@ -355,10 +360,10 @@ func (a adminAPIHandlers) SetUserStatus(w http.ResponseWriter, r *http.Request)
func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddUser")
defer logger.AuditLog(w, r, "AddUser", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
accessKey := vars["accessKey"]
accessKey := path.Clean(vars["accessKey"])
// Get current object layer instance.
objectAPI := newObjectLayerFn()
@@ -373,23 +378,30 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
return
}
if cred.IsTemp() || cred.IsServiceAccount() {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccountNotEligible), r.URL)
return
}
// Not allowed to add a user with same access key as root credential
if owner && accessKey == cred.AccessKey {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAddUserInvalidArgument), r.URL)
return
}
if (cred.IsTemp() || cred.IsServiceAccount()) && cred.ParentUser == accessKey {
// If the incoming access key matches the parent user, reject
// password change requests.
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAddUserInvalidArgument), r.URL)
return
}
implicitPerm := accessKey == cred.AccessKey
if !implicitPerm {
parentUser := cred.ParentUser
if parentUser == "" {
parentUser = cred.AccessKey
}
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
AccountName: parentUser,
Groups: cred.Groups,
Action: iampolicy.CreateUserAdminAction,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
ConditionValues: getConditionValues(r, "", parentUser, claims),
IsOwner: owner,
Claims: claims,
}) {
@@ -398,6 +410,19 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
}
}
if implicitPerm && !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: accessKey,
Groups: cred.Groups,
Action: iampolicy.CreateUserAdminAction,
ConditionValues: getConditionValues(r, "", accessKey, claims),
IsOwner: owner,
Claims: claims,
DenyOnly: true, // check if changing password is explicitly denied.
}) {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL)
return
}
if r.ContentLength > maxEConfigJSONSize || r.ContentLength == -1 {
// More than maxEConfigJSONSize bytes were available
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminConfigTooLarge), r.URL)
@@ -419,7 +444,7 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
return
}
if err = globalIAMSys.SetUser(accessKey, uinfo); err != nil {
if err = globalIAMSys.CreateUser(accessKey, uinfo); err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
@@ -437,7 +462,7 @@ func (a adminAPIHandlers) AddUser(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddServiceAccount")
defer logger.AuditLog(w, r, "AddServiceAccount", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerFn()
@@ -476,7 +501,7 @@ func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Reque
parentUser = cred.ParentUser
}
newCred, err := globalIAMSys.NewServiceAccount(ctx, parentUser, createReq.Policy)
newCred, err := globalIAMSys.NewServiceAccount(ctx, parentUser, cred.Groups, createReq.Policy)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -516,7 +541,7 @@ func (a adminAPIHandlers) AddServiceAccount(w http.ResponseWriter, r *http.Reque
func (a adminAPIHandlers) ListServiceAccounts(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListServiceAccounts")
defer logger.AuditLog(w, r, "ListServiceAccounts", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerFn()
@@ -571,7 +596,7 @@ func (a adminAPIHandlers) ListServiceAccounts(w http.ResponseWriter, r *http.Req
func (a adminAPIHandlers) DeleteServiceAccount(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteServiceAccount")
defer logger.AuditLog(w, r, "DeleteServiceAccount", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerFn()
@@ -630,7 +655,7 @@ func (a adminAPIHandlers) DeleteServiceAccount(w http.ResponseWriter, r *http.Re
func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AccountInfo")
defer logger.AuditLog(w, r, "AccountInfo", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Get current object layer instance.
objectAPI := newObjectLayerFn()
@@ -656,6 +681,7 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
// https://github.com/golang/go/wiki/SliceTricks#filter-in-place
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
@@ -668,6 +694,7 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectAction,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
@@ -681,12 +708,6 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
return rd, wr
}
buckets, err := objectAPI.ListBuckets(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
// Load the latest calculated data usage
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
if err != nil {
@@ -694,12 +715,47 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
logger.LogIf(ctx, err)
}
accountName := cred.AccessKey
if cred.ParentUser != "" {
accountName = cred.ParentUser
// If etcd and DNS federation are configured, list buckets from etcd.
var buckets []BucketInfo
if globalDNSConfig != nil && globalBucketFederation {
dnsBuckets, err := globalDNSConfig.List()
if err != nil && !IsErrIgnored(err,
dns.ErrNoEntriesFound,
dns.ErrDomainMissing) {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
for _, dnsRecords := range dnsBuckets {
buckets = append(buckets, BucketInfo{
Name: dnsRecords[0].Key,
Created: dnsRecords[0].CreationDate,
})
}
sort.Slice(buckets, func(i, j int) bool {
return buckets[i].Name < buckets[j].Name
})
} else {
buckets, err = objectAPI.ListBuckets(ctx)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
}
}
policies, err := globalIAMSys.PolicyDBGet(accountName, false)
accountName := cred.AccessKey
var policies []string
switch globalIAMSys.usersSysType {
case MinIOUsersSysType:
policies, err = globalIAMSys.PolicyDBGet(accountName, false)
case LDAPUsersSysType:
parentUser := accountName
if cred.ParentUser != "" {
parentUser = cred.ParentUser
}
policies, err = globalIAMSys.PolicyDBGet(parentUser, false, cred.Groups...)
default:
err = errors.New("should not happen!")
}
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@@ -744,7 +800,7 @@ func (a adminAPIHandlers) AccountInfoHandler(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) InfoCannedPolicyV2(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "InfoCannedPolicyV2")
defer logger.AuditLog(w, r, "InfoCannedPolicyV2", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetPolicyAdminAction)
if objectAPI == nil {
@@ -771,7 +827,7 @@ func (a adminAPIHandlers) InfoCannedPolicyV2(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) InfoCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "InfoCannedPolicy")
defer logger.AuditLog(w, r, "InfoCannedPolicy", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.GetPolicyAdminAction)
if objectAPI == nil {
@@ -795,7 +851,7 @@ func (a adminAPIHandlers) InfoCannedPolicy(w http.ResponseWriter, r *http.Reques
func (a adminAPIHandlers) ListCannedPoliciesV2(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListCannedPoliciesV2")
defer logger.AuditLog(w, r, "ListCannedPoliciesV2", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListUserPoliciesAdminAction)
if objectAPI == nil {
@@ -829,7 +885,7 @@ func (a adminAPIHandlers) ListCannedPoliciesV2(w http.ResponseWriter, r *http.Re
func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListCannedPolicies")
defer logger.AuditLog(w, r, "ListCannedPolicies", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.ListUserPoliciesAdminAction)
if objectAPI == nil {
@@ -863,7 +919,7 @@ func (a adminAPIHandlers) ListCannedPolicies(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) RemoveCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "RemoveCannedPolicy")
defer logger.AuditLog(w, r, "RemoveCannedPolicy", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.DeletePolicyAdminAction)
if objectAPI == nil {
@@ -891,7 +947,7 @@ func (a adminAPIHandlers) RemoveCannedPolicy(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) AddCannedPolicy(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "AddCannedPolicy")
defer logger.AuditLog(w, r, "AddCannedPolicy", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.CreatePolicyAdminAction)
if objectAPI == nil {
@@ -943,7 +999,7 @@ func (a adminAPIHandlers) AddCannedPolicy(w http.ResponseWriter, r *http.Request
func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "SetPolicyForUserOrGroup")
defer logger.AuditLog(w, r, "SetPolicyForUserOrGroup", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminUsersReq(ctx, w, r, iampolicy.AttachPolicyAdminAction)
if objectAPI == nil {
@@ -956,7 +1012,7 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
isGroup := vars["isGroup"] == "true"
if !isGroup {
ok, err := globalIAMSys.IsTempUser(entityName)
ok, _, err := globalIAMSys.IsTempUser(entityName)
if err != nil && err != errNoSuchUser {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
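
The `DenyOnly` flag added to the `AddUser` permission check above inverts the usual evaluation: a user may change their own password without an explicit allow, as long as no statement explicitly denies it. A self-contained sketch of that semantics; the `Effect` type and `isAllowed` function are illustrative, not the iampolicy implementation.

```
package main

import "fmt"

// Effect is a minimal stand-in for a policy statement outcome.
type Effect int

const (
	NoMatch Effect = iota
	Allow
	Deny
)

// isAllowed sketches the DenyOnly semantics: in normal mode an explicit
// Allow is required, while in DenyOnly mode the action passes unless
// some statement explicitly denies it.
func isAllowed(statements []Effect, denyOnly bool) bool {
	allowed := false
	for _, e := range statements {
		switch e {
		case Deny:
			return false // an explicit deny always wins
		case Allow:
			allowed = true
		}
	}
	if denyOnly {
		return true // no explicit deny found
	}
	return allowed
}

func main() {
	// A user with no matching statements may still change their own password...
	fmt.Println(isAllowed(nil, true)) // true
	// ...but cannot perform other admin actions without an explicit allow.
	fmt.Println(isAllowed(nil, false)) // false
}
```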

View File

@@ -24,6 +24,7 @@ import (
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"net/url"
"os"
@@ -42,6 +43,7 @@ import (
"github.com/minio/minio/cmd/logger/message/log"
"github.com/minio/minio/pkg/auth"
"github.com/minio/minio/pkg/bandwidth"
"github.com/minio/minio/pkg/dsync"
"github.com/minio/minio/pkg/handlers"
iampolicy "github.com/minio/minio/pkg/iam/policy"
"github.com/minio/minio/pkg/madmin"
@@ -78,7 +80,7 @@ func updateServer(u *url.URL, sha256Sum []byte, lrTime time.Time, mode string) (
func (a adminAPIHandlers) ServerUpdateHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ServerUpdate")
defer logger.AuditLog(w, r, "ServerUpdate", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerUpdateAdminAction)
if objectAPI == nil {
@@ -187,7 +189,7 @@ func (a adminAPIHandlers) ServerUpdateHandler(w http.ResponseWriter, r *http.Req
func (a adminAPIHandlers) ServiceHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "Service")
defer logger.AuditLog(w, r, "Service", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
action := vars["action"]
@@ -258,9 +260,11 @@ type ServerHTTPAPIStats struct {
// ServerHTTPStats holds all type of http operations performed to/from the server
// including their average execution time.
type ServerHTTPStats struct {
S3RequestsInQueue int32 `json:"s3RequestsInQueue"`
CurrentS3Requests ServerHTTPAPIStats `json:"currentS3Requests"`
TotalS3Requests ServerHTTPAPIStats `json:"totalS3Requests"`
TotalS3Errors ServerHTTPAPIStats `json:"totalS3Errors"`
TotalS3Canceled ServerHTTPAPIStats `json:"totalS3Canceled"`
}
// ServerInfoData holds storage, connections and other
@@ -284,7 +288,7 @@ type ServerInfo struct {
func (a adminAPIHandlers) StorageInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StorageInfo")
defer logger.AuditLog(w, r, "StorageInfo", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.StorageInfoAdminAction)
if objectAPI == nil {
@@ -292,10 +296,10 @@ func (a adminAPIHandlers) StorageInfoHandler(w http.ResponseWriter, r *http.Requ
}
// Ignore any errors here.
storageInfo, _ := objectAPI.StorageInfo(ctx, false)
storageInfo, _ := objectAPI.StorageInfo(ctx)
// Collect any disk healing.
healing, _ := getAggregatedBackgroundHealState(ctx)
healing, _ := getAggregatedBackgroundHealState(ctx, nil)
healDisks := make(map[string]struct{}, len(healing.HealDisks))
for _, disk := range healing.HealDisks {
healDisks[disk] = struct{}{}
@@ -327,7 +331,7 @@ func (a adminAPIHandlers) StorageInfoHandler(w http.ResponseWriter, r *http.Requ
func (a adminAPIHandlers) DataUsageInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DataUsageInfo")
defer logger.AuditLog(w, r, "DataUsageInfo", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.DataUsageInfoAdminAction)
if objectAPI == nil {
@@ -403,11 +407,50 @@ type PeerLocks struct {
Locks map[string][]lockRequesterInfo
}
// ForceUnlockHandler force-unlocks the requested resources
func (a adminAPIHandlers) ForceUnlockHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ForceUnlock")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ForceUnlockAdminAction)
if objectAPI == nil {
return
}
z, ok := objectAPI.(*erasureServerPools)
if !ok {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
vars := mux.Vars(r)
var args dsync.LockArgs
lockersMap := make(map[string]dsync.NetLocker)
for _, path := range strings.Split(vars["paths"], ",") {
if path == "" {
continue
}
args.Resources = append(args.Resources, path)
lockers, _ := z.serverPools[0].getHashedSet(path).getLockers()
for _, locker := range lockers {
if locker != nil {
lockersMap[locker.String()] = locker
}
}
}
for _, locker := range lockersMap {
locker.ForceUnlock(ctx, args)
}
}
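
`ForceUnlockHandler` collects the lockers for every requested path into a map keyed by their `String()` form before calling `ForceUnlock`, so each node is contacted once even when several paths hash to it. A minimal sketch of that de-duplication; the `Locker` type and `lockersFor` helper are hypothetical.

```
package main

import (
	"fmt"
	"strings"
)

// Locker is a minimal stand-in for dsync.NetLocker; only String is
// needed for the de-duplication shown in the handler above.
type Locker struct{ addr string }

func (l Locker) String() string { return l.addr }

func main() {
	// Lockers for different paths may live on the same node; keying the
	// map by String() sends ForceUnlock once per node, not once per path.
	paths := strings.Split("bucket/a,bucket/b,", ",")
	lockersFor := func(path string) []Locker {
		return []Locker{{"node-1:9000"}, {"node-2:9000"}}
	}
	unique := make(map[string]Locker)
	for _, p := range paths {
		if p == "" {
			continue // tolerate trailing commas, as the handler does
		}
		for _, l := range lockersFor(p) {
			unique[l.String()] = l
		}
	}
	fmt.Println(len(unique), "nodes to contact")
}
```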
// TopLocksHandler returns the list of locks in use
func (a adminAPIHandlers) TopLocksHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "TopLocks")
defer logger.AuditLog(w, r, "TopLocks", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.TopLocksAdminAction)
if objectAPI == nil {
@@ -459,10 +502,17 @@ type StartProfilingResult struct {
func (a adminAPIHandlers) StartProfilingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "StartProfiling")
defer logger.AuditLog(w, r, "StartProfiling", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ProfilingAdminAction)
if objectAPI == nil {
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ProfilingAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
}
if globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
@@ -557,10 +607,17 @@ func (f dummyFileInfo) Sys() interface{} { return f.sys }
func (a adminAPIHandlers) DownloadProfilingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DownloadProfiling")
defer logger.AuditLog(w, r, "DownloadProfiling", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ProfilingAdminAction)
if objectAPI == nil {
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ProfilingAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
}
if globalNotificationSys == nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrServerNotInitialized), r.URL)
return
}
@@ -651,7 +708,7 @@ func extractHealInitParams(vars map[string]string, qParms url.Values, r io.Reade
func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "Heal")
defer logger.AuditLog(w, r, "Heal", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealAdminAction)
if objectAPI == nil {
@@ -806,16 +863,14 @@ func (a adminAPIHandlers) HealHandler(w http.ResponseWriter, r *http.Request) {
keepConnLive(w, r, respCh)
}
func getAggregatedBackgroundHealState(ctx context.Context) (madmin.BgHealState, error) {
var bgHealStates []madmin.BgHealState
localHealState, ok := getLocalBackgroundHealStatus()
if !ok {
return madmin.BgHealState{}, errServerNotInitialized
}
// getAggregatedBackgroundHealState returns the heal state of disks.
// If no ObjectLayer is provided, no per-set status is returned.
func getAggregatedBackgroundHealState(ctx context.Context, o ObjectLayer) (madmin.BgHealState, error) {
// Get local heal status first
bgHealStates = append(bgHealStates, localHealState)
bgHealStates, ok := getBackgroundHealStatus(ctx, o)
if !ok {
return bgHealStates, errServerNotInitialized
}
if globalIsDistErasure {
// Get heal status from other peers
@@ -830,39 +885,16 @@ func getAggregatedBackgroundHealState(ctx context.Context) (madmin.BgHealState,
if errCount == len(nerrs) {
return madmin.BgHealState{}, fmt.Errorf("all remote servers failed to report heal status, cluster is unhealthy")
}
bgHealStates = append(bgHealStates, peersHealStates...)
bgHealStates.Merge(peersHealStates...)
}
// Aggregate healing result
var aggregatedHealStateResult = madmin.BgHealState{
ScannedItemsCount: bgHealStates[0].ScannedItemsCount,
LastHealActivity: bgHealStates[0].LastHealActivity,
NextHealRound: bgHealStates[0].NextHealRound,
HealDisks: bgHealStates[0].HealDisks,
}
bgHealStates = bgHealStates[1:]
for _, state := range bgHealStates {
aggregatedHealStateResult.ScannedItemsCount += state.ScannedItemsCount
aggregatedHealStateResult.HealDisks = append(aggregatedHealStateResult.HealDisks, state.HealDisks...)
if !state.LastHealActivity.IsZero() && aggregatedHealStateResult.LastHealActivity.Before(state.LastHealActivity) {
aggregatedHealStateResult.LastHealActivity = state.LastHealActivity
// The node with the last heal activity is the node orchestrating
// self healing operations, which also means it is the node that
// decides when the next self healing operation will be done.
aggregatedHealStateResult.NextHealRound = state.NextHealRound
}
}
return aggregatedHealStateResult, nil
return bgHealStates, nil
}
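
The aggregation loop deleted above moves behind a `Merge` method on the heal state. A sketch of what such a merge plausibly does, reconstructed from the removed code; `healState` is a pared-down stand-in for `madmin.BgHealState`.

```
package main

import (
	"fmt"
	"time"
)

// healState is a pared-down stand-in for madmin.BgHealState.
type healState struct {
	ScannedItemsCount int64
	LastHealActivity  time.Time
	HealDisks         []string
}

// Merge folds peer states into the receiver, the work the refactor above
// moves out of the handler: counters add up, disk lists concatenate, and
// the most recent heal activity wins (that node drives the current round).
func (h *healState) Merge(others ...healState) {
	for _, o := range others {
		h.ScannedItemsCount += o.ScannedItemsCount
		h.HealDisks = append(h.HealDisks, o.HealDisks...)
		if o.LastHealActivity.After(h.LastHealActivity) {
			h.LastHealActivity = o.LastHealActivity
		}
	}
}

func main() {
	local := healState{ScannedItemsCount: 10, LastHealActivity: time.Now()}
	peer := healState{ScannedItemsCount: 5, HealDisks: []string{"/disk1"}}
	local.Merge(peer)
	fmt.Println(local.ScannedItemsCount, len(local.HealDisks))
}
```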
func (a adminAPIHandlers) BackgroundHealStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HealBackgroundStatus")
defer logger.AuditLog(w, r, "HealBackgroundStatus", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealAdminAction)
if objectAPI == nil {
@@ -875,7 +907,7 @@ func (a adminAPIHandlers) BackgroundHealStatusHandler(w http.ResponseWriter, r *
return
}
aggregateHealStateResult, err := getAggregatedBackgroundHealState(r.Context())
aggregateHealStateResult, err := getAggregatedBackgroundHealState(r.Context(), objectAPI)
if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return
@@ -900,7 +932,7 @@ func validateAdminReq(ctx context.Context, w http.ResponseWriter, r *http.Reques
}
// Validate request signature.
cred, adminAPIErr = checkAdminRequestAuthType(ctx, r, action, "")
cred, adminAPIErr = checkAdminRequestAuth(ctx, r, action, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return nil, cred
@@ -1032,7 +1064,7 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
trcErr := r.URL.Query().Get("err") == "true"
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuthType(ctx, r, iampolicy.TraceAdminAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.TraceAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
@@ -1083,7 +1115,7 @@ func (a adminAPIHandlers) TraceHandler(w http.ResponseWriter, r *http.Request) {
func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ConsoleLog")
defer logger.AuditLog(w, r, "ConsoleLog", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ConsoleLogAdminAction)
if objectAPI == nil {
@@ -1154,7 +1186,7 @@ func (a adminAPIHandlers) ConsoleLogHandler(w http.ResponseWriter, r *http.Reque
// KMSCreateKeyHandler - POST /minio/admin/v3/kms/key/create?key-id=<master-key-id>
func (a adminAPIHandlers) KMSCreateKeyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "KMSCreateKey")
defer logger.AuditLog(w, r, "KMSCreateKey", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.KMSCreateKeyAdminAction)
if objectAPI == nil {
@@ -1177,7 +1209,7 @@ func (a adminAPIHandlers) KMSCreateKeyHandler(w http.ResponseWriter, r *http.Req
func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "KMSKeyStatus")
defer logger.AuditLog(w, r, "KMSKeyStatus", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.KMSKeyStatusAdminAction)
if objectAPI == nil {
@@ -1250,7 +1282,7 @@ func (a adminAPIHandlers) KMSKeyStatusHandler(w http.ResponseWriter, r *http.Req
func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HealthInfo")
defer logger.AuditLog(w, r, "HealthInfo", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.HealthInfoAdminAction)
if objectAPI == nil {
@@ -1293,8 +1325,10 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
deadlinedCtx, cancel := context.WithTimeout(ctx, deadline)
defer cancel()
var err error
nsLock := objectAPI.NewNSLock(minioMetaBucket, "health-check-in-progress")
if err := nsLock.GetLock(ctx, newDynamicTimeout(deadline, deadline)); err != nil { // returns a locked lock
ctx, err = nsLock.GetLock(ctx, newDynamicTimeout(deadline, deadline))
if err != nil { // returns a locked lock
errResp(err)
return
}
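
The `HealthInfoHandler` hunk shows `NewNSLock(...).GetLock` now returning a context along with the error. One way such a contract can work is sketched below, under the assumption (not shown in the diff) that the returned context is canceled when the lock is released, so guarded work can watch `ctx.Done()`.

```
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// nsLock sketches the assumed contract: on success GetLock returns a
// context derived from the caller's that is canceled when the lock is
// released, so work guarded by the lock can watch ctx.Done().
type nsLock struct {
	mu     sync.Mutex
	cancel context.CancelFunc
}

func (l *nsLock) GetLock(ctx context.Context, timeout time.Duration) (context.Context, error) {
	// A local mutex stands in for the distributed lock acquisition.
	l.mu.Lock()
	lockCtx, cancel := context.WithCancel(ctx)
	l.cancel = cancel
	return lockCtx, nil
}

func (l *nsLock) Unlock() {
	l.cancel() // anyone holding the returned context sees Done()
	l.mu.Unlock()
}

func main() {
	var l nsLock
	ctx, err := l.GetLock(context.Background(), time.Second)
	if err != nil {
		fmt.Println("lock not acquired:", err)
		return
	}
	go func() {
		<-ctx.Done()
		fmt.Println("lock context canceled; stop guarded work")
	}()
	l.Unlock()
	time.Sleep(50 * time.Millisecond)
}
```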
@@ -1398,7 +1432,7 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
}()
ticker := time.NewTicker(30 * time.Second)
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for {
@@ -1428,39 +1462,42 @@ func (a adminAPIHandlers) HealthInfoHandler(w http.ResponseWriter, r *http.Reque
func (a adminAPIHandlers) BandwidthMonitorHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "BandwidthMonitor")
defer logger.AuditLog(w, r, "BandwidthMonitor", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuthType(ctx, r, iampolicy.BandwidthMonitorAction, "")
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.BandwidthMonitorAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
}
rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
setEventStreamHeaders(w)
reportCh := make(chan bandwidth.Report, 1)
reportCh := make(chan bandwidth.Report)
keepAliveTicker := time.NewTicker(500 * time.Millisecond)
defer keepAliveTicker.Stop()
bucketsRequestedString := r.URL.Query().Get("buckets")
bucketsRequested := strings.Split(bucketsRequestedString, ",")
go func() {
defer close(reportCh)
for {
reportCh <- globalNotificationSys.GetBandwidthReports(ctx, bucketsRequested...)
select {
case <-ctx.Done():
return
default:
time.Sleep(2 * time.Second)
case reportCh <- globalNotificationSys.GetBandwidthReports(ctx, bucketsRequested...):
time.Sleep(time.Duration(rnd.Float64() * float64(2*time.Second)))
}
}
}()
for {
select {
case report := <-reportCh:
enc := json.NewEncoder(w)
err := enc.Encode(report)
if err != nil {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrInternalError), r.URL)
case report, ok := <-reportCh:
if !ok {
return
}
if err := json.NewEncoder(w).Encode(report); err != nil {
writeErrorResponseJSON(ctx, w, toAPIError(ctx, err), r.URL)
return
}
w.(http.Flusher).Flush()
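
The rewritten `BandwidthMonitorHandler` puts the report channel fully under the producer's control: the goroutine that sends on it also selects on `ctx.Done()` so it can never block forever, closes the channel on exit, and jitters its interval so peers do not poll in lockstep. The same shape in a runnable sketch, with an `int` standing in for `bandwidth.Report`.

```
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// produceReports follows the fixed pattern above: the producer owns the
// channel, watches ctx.Done, closes the channel on exit, and sleeps a
// random fraction of the base interval between sends.
func produceReports(ctx context.Context, out chan<- int) {
	defer close(out)
	rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
	n := 0
	for {
		select {
		case <-ctx.Done():
			return
		case out <- n:
			n++
			time.Sleep(time.Duration(rnd.Float64() * float64(100*time.Millisecond)))
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()
	reports := make(chan int) // unbuffered, exactly as in the handler
	go produceReports(ctx, reports)
	for r := range reports { // the loop ends cleanly once the producer closes
		fmt.Println("report", r)
	}
}
```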
@@ -1481,37 +1518,28 @@ func (a adminAPIHandlers) BandwidthMonitorHandler(w http.ResponseWriter, r *http
func (a adminAPIHandlers) ServerInfoHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ServerInfo")
defer logger.AuditLog(w, r, "ServerInfo", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI, _ := validateAdminReq(ctx, w, r, iampolicy.ServerInfoAdminAction)
if objectAPI == nil {
// Validate request signature.
_, adminAPIErr := checkAdminRequestAuth(ctx, r, iampolicy.ServerInfoAdminAction, "")
if adminAPIErr != ErrNone {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(adminAPIErr), r.URL)
return
}
buckets := madmin.Buckets{}
objects := madmin.Objects{}
usage := madmin.Usage{}
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
if err == nil {
buckets = madmin.Buckets{Count: dataUsageInfo.BucketsCount}
objects = madmin.Objects{Count: dataUsageInfo.ObjectsTotalCount}
usage = madmin.Usage{Size: dataUsageInfo.ObjectsTotalSize}
}
vault := fetchVaultStatus()
kmsStat := fetchKMSStatus()
ldap := madmin.LDAP{}
if globalLDAPConfig.Enabled {
ldapConn, err := globalLDAPConfig.Connect()
if err != nil {
ldap.Status = "offline"
ldap.Status = string(madmin.ItemOffline)
} else if ldapConn == nil {
ldap.Status = "Not Configured"
} else {
// Close ldap connection to avoid leaks.
ldapConn.Close()
ldap.Status = "online"
ldap.Status = string(madmin.ItemOnline)
}
}
@@ -1520,61 +1548,70 @@ func (a adminAPIHandlers) ServerInfoHandler(w http.ResponseWriter, r *http.Reque
// Get the notification target info
notifyTarget := fetchLambdaInfo()
// Fetching the Storage information, ignore any errors.
storageInfo, _ := objectAPI.StorageInfo(ctx, false)
local := getLocalServerProperty(globalEndpoints, r)
servers := globalNotificationSys.ServerInfo()
servers = append(servers, local)
assignPoolNumbers(servers)
var backend interface{}
if storageInfo.Backend.Type == BackendType(madmin.Erasure) {
backend = madmin.ErasureBackend{
Type: madmin.ErasureType,
OnlineDisks: storageInfo.Backend.OnlineDisks.Sum(),
OfflineDisks: storageInfo.Backend.OfflineDisks.Sum(),
StandardSCData: storageInfo.Backend.StandardSCData,
StandardSCParity: storageInfo.Backend.StandardSCParity,
RRSCData: storageInfo.Backend.RRSCData,
RRSCParity: storageInfo.Backend.RRSCParity,
mode := madmin.ItemInitializing
buckets := madmin.Buckets{}
objects := madmin.Objects{}
usage := madmin.Usage{}
objectAPI := newObjectLayerFn()
if objectAPI != nil {
mode = madmin.ItemOnline
// Load data usage
dataUsageInfo, err := loadDataUsageFromBackend(ctx, objectAPI)
if err == nil {
buckets = madmin.Buckets{Count: dataUsageInfo.BucketsCount}
objects = madmin.Objects{Count: dataUsageInfo.ObjectsTotalCount}
usage = madmin.Usage{Size: dataUsageInfo.ObjectsTotalSize}
} else {
buckets = madmin.Buckets{Error: err.Error()}
objects = madmin.Objects{Error: err.Error()}
usage = madmin.Usage{Error: err.Error()}
}
} else {
backend = madmin.FSBackend{
Type: madmin.FsType,
// Fetching the backend information
backendInfo := objectAPI.BackendInfo()
if backendInfo.Type == madmin.Erasure {
// Calculate the number of online/offline disks of all nodes
var allDisks []madmin.Disk
for _, s := range servers {
allDisks = append(allDisks, s.Disks...)
}
onlineDisks, offlineDisks := getOnlineOfflineDisksStats(allDisks)
backend = madmin.ErasureBackend{
Type: madmin.ErasureType,
OnlineDisks: onlineDisks.Sum(),
OfflineDisks: offlineDisks.Sum(),
StandardSCParity: backendInfo.StandardSCParity,
RRSCParity: backendInfo.RRSCParity,
}
} else {
backend = madmin.FSBackend{
Type: madmin.FsType,
}
}
}
mode := "online"
server := getLocalServerProperty(globalEndpoints, r)
servers := globalNotificationSys.ServerInfo()
servers = append(servers, server)
domain := globalDomainNames
services := madmin.Services{
Vault: vault,
KMS: kmsStat,
LDAP: ldap,
Logger: log,
Audit: audit,
Notifications: notifyTarget,
}
// find all disks which belong to their respective endpoints
for i := range servers {
for _, disk := range storageInfo.Disks {
if strings.Contains(disk.Endpoint, servers[i].Endpoint) {
servers[i].Disks = append(servers[i].Disks, disk)
}
}
}
// add all the disks local to this server.
for _, disk := range storageInfo.Disks {
if disk.DrivePath == "" && disk.Endpoint == "" {
continue
}
if disk.Endpoint == disk.DrivePath {
servers[len(servers)-1].Disks = append(servers[len(servers)-1].Disks, disk)
}
}
infoMsg := madmin.InfoMessage{
Mode: mode,
Mode: string(mode),
Domain: domain,
Region: globalServerRegion,
SQSARN: globalNotificationSys.GetARNList(false),
@@ -1599,6 +1636,22 @@ func (a adminAPIHandlers) ServerInfoHandler(w http.ResponseWriter, r *http.Reque
writeSuccessResponseJSON(w, jsonBytes)
}
func assignPoolNumbers(servers []madmin.ServerProperties) {
for i := range servers {
for idx, ge := range globalEndpoints {
for _, endpoint := range ge.Endpoints {
if servers[i].Endpoint == endpoint.Host {
servers[i].PoolNumber = idx + 1
} else if host, err := xnet.ParseHost(servers[i].Endpoint); err == nil {
if host.Name == endpoint.Hostname() {
servers[i].PoolNumber = idx + 1
}
}
}
}
}
}
func fetchLambdaInfo() []map[string][]madmin.TargetIDStatus {
lambdaMap := make(map[string][]madmin.TargetIDStatus)
@@ -1608,9 +1661,9 @@ func fetchLambdaInfo() []map[string][]madmin.TargetIDStatus {
active, _ := tgt.IsActive()
targetID := tgt.ID()
if active {
targetIDStatus[targetID.ID] = madmin.Status{Status: "Online"}
targetIDStatus[targetID.ID] = madmin.Status{Status: string(madmin.ItemOnline)}
} else {
targetIDStatus[targetID.ID] = madmin.Status{Status: "Offline"}
targetIDStatus[targetID.ID] = madmin.Status{Status: string(madmin.ItemOffline)}
}
list := lambdaMap[targetID.Name]
list = append(list, targetIDStatus)
@@ -1622,9 +1675,9 @@ func fetchLambdaInfo() []map[string][]madmin.TargetIDStatus {
active, _ := tgt.IsActive()
targetID := tgt.ID()
if active {
targetIDStatus[targetID.ID] = madmin.Status{Status: "Online"}
targetIDStatus[targetID.ID] = madmin.Status{Status: string(madmin.ItemOnline)}
} else {
targetIDStatus[targetID.ID] = madmin.Status{Status: "Offline"}
targetIDStatus[targetID.ID] = madmin.Status{Status: string(madmin.ItemOffline)}
}
list := lambdaMap[targetID.Name]
list = append(list, targetIDStatus)
@@ -1642,47 +1695,46 @@ func fetchLambdaInfo() []map[string][]madmin.TargetIDStatus {
return notify
}
// fetchVaultStatus fetches Vault Info
func fetchVaultStatus() madmin.Vault {
vault := madmin.Vault{}
// fetchKMSStatus fetches KMS-related status information.
func fetchKMSStatus() madmin.KMS {
kmsStat := madmin.KMS{}
if GlobalKMS == nil {
vault.Status = "disabled"
return vault
kmsStat.Status = "disabled"
return kmsStat
}
keyID := GlobalKMS.DefaultKeyID()
kmsInfo := GlobalKMS.Info()
if len(kmsInfo.Endpoints) == 0 {
vault.Status = "KMS configured using master key"
return vault
kmsStat.Status = "KMS configured using master key"
return kmsStat
}
if err := checkConnection(kmsInfo.Endpoints[0], 15*time.Second); err != nil {
vault.Status = "offline"
kmsStat.Status = string(madmin.ItemOffline)
} else {
vault.Status = "online"
kmsStat.Status = string(madmin.ItemOnline)
kmsContext := crypto.Context{"MinIO admin API": "ServerInfoHandler"} // Context for a test key operation
// 1. Generate a new key using the KMS.
key, sealedKey, err := GlobalKMS.GenerateKey(keyID, kmsContext)
if err != nil {
vault.Encrypt = fmt.Sprintf("Encryption failed: %v", err)
kmsStat.Encrypt = fmt.Sprintf("Encryption failed: %v", err)
} else {
vault.Encrypt = "Ok"
kmsStat.Encrypt = "success"
}
// 2. Verify that we can indeed decrypt the (encrypted) key
decryptedKey, err := GlobalKMS.UnsealKey(keyID, sealedKey, kmsContext)
switch {
case err != nil:
vault.Decrypt = fmt.Sprintf("Decryption failed: %v", err)
kmsStat.Decrypt = fmt.Sprintf("Decryption failed: %v", err)
case subtle.ConstantTimeCompare(key[:], decryptedKey[:]) != 1:
vault.Decrypt = "Decryption failed: decrypted key does not match generated key"
kmsStat.Decrypt = "Decryption failed: decrypted key does not match generated key"
default:
vault.Decrypt = "Ok"
kmsStat.Decrypt = "success"
}
}
return vault
return kmsStat
}
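
`fetchKMSStatus` verifies the KMS with a full round trip: generate a key, unseal it again, and compare the two in constant time. The sketch below mirrors that check with a toy XOR sealer; the sealing scheme is invented, only the round-trip-and-compare pattern follows the handler.

```
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"fmt"
)

// A toy "KMS" for illustration: sealing is XOR with a node secret. The
// handler above uses GlobalKMS.GenerateKey / UnsealKey; the point here is
// the round-trip check pattern, not the crypto.
var secret = make([]byte, 32)

func generateKey() (key, sealed []byte) {
	key = make([]byte, 32)
	rand.Read(key)
	sealed = make([]byte, 32)
	for i := range key {
		sealed[i] = key[i] ^ secret[i]
	}
	return key, sealed
}

func unsealKey(sealed []byte) []byte {
	key := make([]byte, 32)
	for i := range sealed {
		key[i] = sealed[i] ^ secret[i]
	}
	return key
}

func main() {
	rand.Read(secret)
	key, sealed := generateKey()
	decrypted := unsealKey(sealed)
	// Constant-time compare, as the status check above does, to confirm
	// the KMS can decrypt what it just encrypted.
	if subtle.ConstantTimeCompare(key, decrypted) == 1 {
		fmt.Println("encrypt/decrypt round-trip: success")
	} else {
		fmt.Println("round-trip failed")
	}
}
```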
// fetchLoggerInfo returns log info
@@ -1695,11 +1747,11 @@ func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
err := checkConnection(target.Endpoint(), 15*time.Second)
if err == nil {
mapLog := make(map[string]madmin.Status)
mapLog[tgt] = madmin.Status{Status: "Online"}
mapLog[tgt] = madmin.Status{Status: string(madmin.ItemOnline)}
loggerInfo = append(loggerInfo, mapLog)
} else {
mapLog := make(map[string]madmin.Status)
mapLog[tgt] = madmin.Status{Status: "offline"}
mapLog[tgt] = madmin.Status{Status: string(madmin.ItemOffline)}
loggerInfo = append(loggerInfo, mapLog)
}
}
@@ -1711,11 +1763,11 @@ func fetchLoggerInfo() ([]madmin.Logger, []madmin.Audit) {
err := checkConnection(target.Endpoint(), 15*time.Second)
if err == nil {
mapAudit := make(map[string]madmin.Status)
mapAudit[tgt] = madmin.Status{Status: "Online"}
mapAudit[tgt] = madmin.Status{Status: string(madmin.ItemOnline)}
auditloggerInfo = append(auditloggerInfo, mapAudit)
} else {
mapAudit := make(map[string]madmin.Status)
mapAudit[tgt] = madmin.Status{Status: "Offline"}
mapAudit[tgt] = madmin.Status{Status: string(madmin.ItemOffline)}
auditloggerInfo = append(auditloggerInfo, mapAudit)
}
}

View File

@@ -66,7 +66,7 @@ func prepareAdminErasureTestBed(ctx context.Context) (*adminErasureTestBed, erro
// Initialize boot time
globalBootTime = UTCNow()
globalEndpoints = mustGetZoneEndpoints(erasureDirs...)
globalEndpoints = mustGetPoolEndpoints(erasureDirs...)
newAllSubsystems()
@@ -97,16 +97,9 @@ func initTestErasureObjLayer(ctx context.Context) (ObjectLayer, []string, error)
if err != nil {
return nil, nil, err
}
endpoints := mustGetNewEndpoints(erasureDirs...)
storageDisks, format, err := waitForFormatErasure(true, endpoints, 1, 1, 16, "")
if err != nil {
removeRoots(erasureDirs)
return nil, nil, err
}
endpoints := mustGetPoolEndpoints(erasureDirs...)
globalPolicySys = NewPolicySys()
objLayer := &erasureServerPools{serverPools: make([]*erasureSets, 1)}
objLayer.serverPools[0], err = newErasureSets(ctx, endpoints, storageDisks, format)
objLayer, err := newErasureServerPools(ctx, endpoints)
if err != nil {
return nil, nil, err
}
@@ -364,7 +357,7 @@ func TestExtractHealInitParams(t *testing.T) {
// Test all combinations!
for pIdx, parms := range qParmsArr {
for vIdx, vars := range varsArr {
_, err := extractHealInitParams(vars, parms, bytes.NewBuffer([]byte(body)))
_, err := extractHealInitParams(vars, parms, bytes.NewReader([]byte(body)))
isErrCase := false
if pIdx < 4 || vIdx < 1 {
isErrCase = true
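
The test now builds the heal request body with `bytes.NewReader` rather than `bytes.NewBuffer`. For read-only input a `*bytes.Reader` is the better fit: it cannot be written to, and it can seek, so the same body can be replayed. A small demonstration:

```
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	body := []byte(`{"recursive": false}`)

	// Unlike *bytes.Buffer, *bytes.Reader is strictly read-only and
	// supports seeking, so a test can rewind and replay the same body.
	r := bytes.NewReader(body)
	first, _ := io.ReadAll(r)
	r.Seek(0, io.SeekStart)
	second, _ := io.ReadAll(r)
	fmt.Println(string(first) == string(second)) // true
}
```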

View File

@@ -21,7 +21,7 @@ import (
"encoding/json"
"fmt"
"net/http"
"strings"
"sort"
"sync"
"time"
@@ -89,20 +89,22 @@ type allHealState struct {
sync.RWMutex
// map of heal path to heal sequence
healSeqMap map[string]*healSequence
healSeqMap map[string]*healSequence // Indexed by endpoint
healLocalDisks map[Endpoint]struct{}
healStatus map[string]healingTracker // Indexed by disk ID
}
// newHealState - initialize global heal state management
func newHealState() *allHealState {
healState := &allHealState{
func newHealState(cleanup bool) *allHealState {
hstate := &allHealState{
healSeqMap: make(map[string]*healSequence),
healLocalDisks: map[Endpoint]struct{}{},
healStatus: make(map[string]healingTracker),
}
go healState.periodicHealSeqsClean(GlobalContext)
return healState
if cleanup {
go hstate.periodicHealSeqsClean(GlobalContext)
}
return hstate
}
func (ahs *allHealState) healDriveCount() int {
@@ -112,7 +114,56 @@ func (ahs *allHealState) healDriveCount() int {
return len(ahs.healLocalDisks)
}
func (ahs *allHealState) getHealLocalDisks() Endpoints {
func (ahs *allHealState) popHealLocalDisks(healLocalDisks ...Endpoint) {
ahs.Lock()
defer ahs.Unlock()
for _, ep := range healLocalDisks {
delete(ahs.healLocalDisks, ep)
}
for id, disk := range ahs.healStatus {
for _, ep := range healLocalDisks {
if disk.Endpoint == ep.String() {
delete(ahs.healStatus, id)
}
}
}
}
// updateHealStatus will update the heal status.
func (ahs *allHealState) updateHealStatus(tracker *healingTracker) {
ahs.Lock()
defer ahs.Unlock()
ahs.healStatus[tracker.ID] = *tracker
}
// Sort by pool, set and disk index
func sortDisks(disks []madmin.Disk) {
sort.Slice(disks, func(i, j int) bool {
a, b := &disks[i], &disks[j]
if a.PoolIndex != b.PoolIndex {
return a.PoolIndex < b.PoolIndex
}
if a.SetIndex != b.SetIndex {
return a.SetIndex < b.SetIndex
}
return a.DiskIndex < b.DiskIndex
})
}
// getLocalHealingDisks returns local healing disks indexed by endpoint.
func (ahs *allHealState) getLocalHealingDisks() map[string]madmin.HealingDisk {
ahs.RLock()
defer ahs.RUnlock()
dst := make(map[string]madmin.HealingDisk, len(ahs.healStatus))
for _, v := range ahs.healStatus {
dst[v.Endpoint] = v.toHealingDisk()
}
return dst
}
func (ahs *allHealState) getHealLocalDiskEndpoints() Endpoints {
ahs.RLock()
defer ahs.RUnlock()
@@ -123,15 +174,6 @@ func (ahs *allHealState) getHealLocalDisks() Endpoints {
return endpoints
}
func (ahs *allHealState) popHealLocalDisks(healLocalDisks ...Endpoint) {
ahs.Lock()
defer ahs.Unlock()
for _, ep := range healLocalDisks {
delete(ahs.healLocalDisks, ep)
}
}
func (ahs *allHealState) pushHealLocalDisks(healLocalDisks ...Endpoint) {
ahs.Lock()
defer ahs.Unlock()
@@ -144,9 +186,13 @@ func (ahs *allHealState) pushHealLocalDisks(healLocalDisks ...Endpoint) {
func (ahs *allHealState) periodicHealSeqsClean(ctx context.Context) {
// Launch clean-up routine to remove this heal sequence (after
// it ends) from the global state after timeout has elapsed.
periodicTimer := time.NewTimer(time.Minute * 5)
defer periodicTimer.Stop()
for {
select {
case <-time.After(time.Minute * 5):
case <-periodicTimer.C:
periodicTimer.Reset(time.Minute * 5)
now := UTCNow()
ahs.Lock()
for path, h := range ahs.healSeqMap {
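
Replacing `case <-time.After(time.Minute * 5)` with a single reusable `time.NewTimer` avoids allocating a fresh timer on every loop iteration; with `time.After`, each timer lingers in the runtime until it fires even after the select has moved on. A compressed demonstration of the reuse pattern, with a short interval so it runs quickly:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	// One timer, re-armed with Reset after each firing, instead of a new
	// time.After allocation per loop iteration.
	t := time.NewTimer(50 * time.Millisecond)
	defer t.Stop()
	for i := 0; i < 3; i++ {
		<-t.C
		t.Reset(50 * time.Millisecond) // re-arm instead of re-allocating
		fmt.Println("tick", i)
	}
}
```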
@@ -657,6 +703,13 @@ func (h *healSequence) healSequenceStart(objAPI ObjectLayer) {
}
}
func (h *healSequence) logHeal(healType madmin.HealItemType) {
h.mutex.Lock()
h.scannedItemsMap[healType]++
h.lastHealActivity = UTCNow()
h.mutex.Unlock()
}
func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItemType) error {
globalHealConfigMu.Lock()
opts := globalHealConfig
@@ -672,10 +725,9 @@ func (h *healSequence) queueHealTask(source healSource, healType madmin.HealItem
}
if source.opts != nil {
task.opts = *source.opts
} else {
if opts.Bitrot {
task.opts.ScanMode = madmin.HealDeepScan
}
}
if opts.Bitrot {
task.opts.ScanMode = madmin.HealDeepScan
}
// Wait and proceed if there are active requests
@@ -746,16 +798,19 @@ func (h *healSequence) healItemsFromSourceCh() error {
if !ok {
return nil
}
var itemType madmin.HealItemType
switch {
case source.bucket == nopHeal:
switch source.bucket {
case nopHeal:
continue
case source.bucket == SlashSeparator:
case SlashSeparator:
itemType = madmin.HealItemMetadata
case source.bucket != "" && source.object == "":
itemType = madmin.HealItemBucket
default:
itemType = madmin.HealItemObject
if source.object == "" {
itemType = madmin.HealItemBucket
} else {
itemType = madmin.HealItemObject
}
}
if err := h.queueHealTask(source, itemType); err != nil {
@@ -779,13 +834,18 @@ func (h *healSequence) healFromSourceCh() {
}
func (h *healSequence) healDiskMeta(objAPI ObjectLayer) error {
// Start healing the config prefix.
if err := h.healMinioSysMeta(objAPI, minioConfigPrefix)(); err != nil {
return err
// Try to proactively heal the backend-encrypted file.
if err := h.queueHealTask(healSource{
bucket: minioMetaBucket,
object: backendEncryptedFile,
}, madmin.HealItemBucketMetadata); err != nil {
if !isErrObjectNotFound(err) && !isErrVersionNotFound(err) {
return err
}
}
// Start healing the bucket config prefix.
return h.healMinioSysMeta(objAPI, bucketConfigPrefix)()
// Start healing the config prefix.
return h.healMinioSysMeta(objAPI, minioConfigPrefix)()
}
func (h *healSequence) healItems(objAPI ObjectLayer, bucketsOnly bool) error {
@@ -822,11 +882,6 @@ func (h *healSequence) healMinioSysMeta(objAPI ObjectLayer, metaPrefix string) f
return errHealStopSignalled
}
// Skip metacache entries healing
if strings.HasPrefix(object, "buckets/.minio.sys/.metacache/") {
return nil
}
err := h.queueHealTask(healSource{
bucket: bucket,
object: object,
@@ -863,11 +918,16 @@ func (h *healSequence) healBuckets(objAPI ObjectLayer, bucketsOnly bool) error {
return h.healBucket(objAPI, h.bucket, bucketsOnly)
}
buckets, err := objAPI.ListBucketsHeal(h.ctx)
buckets, err := objAPI.ListBuckets(h.ctx)
if err != nil {
return errFnHealFromAPIErr(h.ctx, err)
}
// Heal latest buckets first.
sort.Slice(buckets, func(i, j int) bool {
return buckets[i].Created.After(buckets[j].Created)
})
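healBuckets now lists buckets through the plain ListBuckets call and applies an explicit newest-first sort, so recently created buckets are healed before older ones. A self-contained sketch of that ordering (bucketInfo here is illustrative, not MinIO's type):

package main

import (
	"fmt"
	"sort"
	"time"
)

type bucketInfo struct {
	Name    string
	Created time.Time
}

func main() {
	buckets := []bucketInfo{
		{Name: "old", Created: time.Now().Add(-48 * time.Hour)},
		{Name: "new", Created: time.Now()},
	}
	sort.Slice(buckets, func(i, j int) bool {
		return buckets[i].Created.After(buckets[j].Created) // newest first
	})
	fmt.Println(buckets[0].Name) // prints "new"
}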
for _, bucket := range buckets {
if err = h.healBucket(objAPI, bucket.Name, bucketsOnly); err != nil {
return err


@@ -20,8 +20,6 @@ import (
"net/http"
"github.com/gorilla/mux"
"github.com/minio/minio/cmd/config"
"github.com/minio/minio/pkg/env"
"github.com/minio/minio/pkg/madmin"
)
@@ -112,10 +110,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps, enableIAMOps bool)
// -- IAM APIs --
// Add policy IAM
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.AddCannedPolicy)).Queries("name", "{name:.*}")
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(httpTraceAll(adminAPI.AddCannedPolicy)).Queries("name", "{name:.*}")
// Add user IAM
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountinfo").HandlerFunc(httpTraceAll(adminAPI.AccountInfoHandler))
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(httpTraceHdrs(adminAPI.AddUser)).Queries("accessKey", "{accessKey:.*}")
@@ -172,31 +169,31 @@ func registerAdminRouter(router *mux.Router, enableConfigOps, enableIAMOps bool)
}
if globalIsDistErasure || globalIsErasure {
// Quota operations
if env.Get(envDataUsageCrawlConf, config.EnableOn) == config.EnableOn {
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// GetBucketQuotaConfig
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/get-bucket-quota").HandlerFunc(
httpTraceHdrs(adminAPI.GetBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// PutBucketQuotaConfig
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-bucket-quota").HandlerFunc(
httpTraceHdrs(adminAPI.PutBucketQuotaConfigHandler)).Queries("bucket", "{bucket:.*}")
// Bucket replication operations
// GetBucketTargetHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-remote-targets").HandlerFunc(
httpTraceHdrs(adminAPI.ListRemoteTargetsHandler)).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
// SetRemoteTargetHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-remote-target").HandlerFunc(
httpTraceHdrs(adminAPI.SetRemoteTargetHandler)).Queries("bucket", "{bucket:.*}")
// RemoveRemoteTargetHandler
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-remote-target").HandlerFunc(
httpTraceHdrs(adminAPI.RemoveRemoteTargetHandler)).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
}
// Bucket replication operations
// GetBucketTargetHandler
adminRouter.Methods(http.MethodGet).Path(adminVersion+"/list-remote-targets").HandlerFunc(
httpTraceHdrs(adminAPI.ListRemoteTargetsHandler)).Queries("bucket", "{bucket:.*}", "type", "{type:.*}")
// SetRemoteTargetHandler
adminRouter.Methods(http.MethodPut).Path(adminVersion+"/set-remote-target").HandlerFunc(
httpTraceHdrs(adminAPI.SetRemoteTargetHandler)).Queries("bucket", "{bucket:.*}")
// RemoveRemoteTargetHandler
adminRouter.Methods(http.MethodDelete).Path(adminVersion+"/remove-remote-target").HandlerFunc(
httpTraceHdrs(adminAPI.RemoveRemoteTargetHandler)).Queries("bucket", "{bucket:.*}", "arn", "{arn:.*}")
}
// -- Top APIs --
// Top locks
if globalIsDistErasure {
// Top locks
adminRouter.Methods(http.MethodGet).Path(adminVersion + "/top/locks").HandlerFunc(httpTraceHdrs(adminAPI.TopLocksHandler))
// Force unlocks paths
adminRouter.Methods(http.MethodPost).Path(adminVersion+"/force-unlock").
Queries("paths", "{paths:.*}").HandlerFunc(httpTraceHdrs(adminAPI.ForceUnlockHandler))
}
// HTTP Trace


@@ -17,14 +17,18 @@
package cmd
import (
"context"
"net/http"
"time"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/madmin"
)
// getLocalServerProperty - returns madmin.ServerProperties for only the
// local endpoints from given list of endpoints
func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Request) madmin.ServerProperties {
var localEndpoints Endpoints
addr := r.Host
if globalIsDistErasure {
addr = GetLocalPeer(endpointServerPools)
@@ -38,26 +42,39 @@ func getLocalServerProperty(endpointServerPools EndpointServerPools, r *http.Req
}
if endpoint.IsLocal {
// Only proceed for local endpoints
network[nodeName] = "online"
network[nodeName] = string(madmin.ItemOnline)
localEndpoints = append(localEndpoints, endpoint)
continue
}
_, present := network[nodeName]
if !present {
if err := IsServerResolvable(endpoint); err == nil {
network[nodeName] = "online"
if err := isServerResolvable(endpoint, 2*time.Second); err == nil {
network[nodeName] = string(madmin.ItemOnline)
} else {
network[nodeName] = "offline"
network[nodeName] = string(madmin.ItemOffline)
// log once the error
logger.LogOnceIf(context.Background(), err, nodeName)
}
}
}
}
return madmin.ServerProperties{
State: "ok",
props := madmin.ServerProperties{
State: string(madmin.ItemInitializing),
Endpoint: addr,
Uptime: UTCNow().Unix() - globalBootTime.Unix(),
Version: Version,
CommitID: CommitID,
Network: network,
}
objLayer := newObjectLayerFn()
if objLayer != nil && !globalIsGateway {
// only need Disks information in server mode.
storageInfo, _ := objLayer.LocalStorageInfo(GlobalContext)
props.State = string(madmin.ItemOnline)
props.Disks = storageInfo.Disks
}
return props
}


@@ -87,6 +87,7 @@ const (
ErrInvalidMaxUploads
ErrInvalidMaxParts
ErrInvalidPartNumberMarker
ErrInvalidPartNumber
ErrInvalidRequestBody
ErrInvalidCopySource
ErrInvalidMetadataDirective
@@ -120,7 +121,6 @@ const (
ErrReplicationSourceNotVersionedError
ErrReplicationNeedsVersioningError
ErrReplicationBucketNeedsVersioningError
ErrBucketReplicationDisabledError
ErrObjectRestoreAlreadyInProgress
ErrNoSuchKey
ErrNoSuchUpload
@@ -263,7 +263,6 @@ const (
// Bucket Quota error codes
ErrAdminBucketQuotaExceeded
ErrAdminNoSuchQuotaConfiguration
ErrAdminBucketQuotaDisabled
ErrHealNotImplemented
ErrHealNoSuchProcess
@@ -439,6 +438,11 @@ var errorCodes = errorCodeMap{
Description: "Argument partNumberMarker must be an integer.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartNumber: {
Code: "InvalidPartNumber",
Description: "The requested partnumber is not satisfiable",
HTTPStatusCode: http.StatusRequestedRangeNotSatisfiable,
},
ErrInvalidPolicyDocument: {
Code: "InvalidPolicyDocument",
Description: "The content of the form does not meet the conditions specified in the policy document.",
@@ -889,11 +893,6 @@ var errorCodes = errorCodeMap{
Description: "Versioning must be 'Enabled' on the bucket to add a replication target",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketReplicationDisabledError: {
Code: "XMinioAdminBucketReplicationDisabled",
Description: "Replication specified but disk usage crawl is disabled on MinIO server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNoSuchObjectLockConfiguration: {
Code: "NoSuchObjectLockConfiguration",
Description: "The specified object does not have a ObjectLock configuration",
@@ -1215,11 +1214,6 @@ var errorCodes = errorCodeMap{
Description: "The quota configuration does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrAdminBucketQuotaDisabled: {
Code: "XMinioAdminBucketQuotaDisabled",
Description: "Quota specified but disk usage crawl is disabled on MinIO server",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInsecureClientRequest: {
Code: "XMinioInsecureClientRequest",
Description: "Cannot respond to plain-text request from TLS-encrypted server",
@@ -2131,6 +2125,12 @@ func toAPIError(ctx context.Context, err error) APIError {
HTTPStatusCode: e.Response().StatusCode,
}
// Add more Gateway SDKs here if any in future.
default:
apiErr = APIError{
Code: apiErr.Code,
Description: fmt.Sprintf("%s: cause(%v)", apiErr.Description, err),
HTTPStatusCode: apiErr.HTTPStatusCode,
}
}
}


@@ -133,18 +133,20 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if strings.EqualFold(k, xhttp.AmzMetaUnencryptedContentLength) || strings.EqualFold(k, xhttp.AmzMetaUnencryptedContentMD5) {
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
}
var isSet bool
for _, userMetadataPrefix := range userMetadataKeyPrefixes {
if !strings.HasPrefix(k, userMetadataPrefix) {
if !strings.HasPrefix(strings.ToLower(k), strings.ToLower(userMetadataPrefix)) {
continue
}
w.Header()[strings.ToLower(k)] = []string{v}
isSet = true
break
}
if !isSet {
w.Header().Set(k, v)
}
@@ -156,16 +158,16 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
return err
}
if opts.PartNumber > 0 {
rs = partNumberToRangeSpec(objInfo, opts.PartNumber)
}
// For providing ranged content
start, rangeLen, err = rs.GetOffsetLength(totalObjectSize)
if err != nil {
return err
}
if rs == nil && opts.PartNumber > 0 {
rs = partNumberToRangeSpec(objInfo, opts.PartNumber)
}
// Set content length.
w.Header().Set(xhttp.ContentLength, strconv.FormatInt(rangeLen, 10))
if rs != nil {
@@ -177,21 +179,25 @@ func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, rs *HTTPRangeSp
if objInfo.VersionID != "" {
w.Header()[xhttp.AmzVersionID] = []string{objInfo.VersionID}
}
if objInfo.ReplicationStatus.String() != "" {
w.Header()[xhttp.AmzBucketReplicationStatus] = []string{objInfo.ReplicationStatus.String()}
}
if lc, err := globalLifecycleSys.Get(objInfo.Bucket); err == nil {
ruleID, expiryTime := lc.PredictExpiryTime(lifecycle.ObjectOpts{
Name: objInfo.Name,
UserTags: objInfo.UserTags,
VersionID: objInfo.VersionID,
ModTime: objInfo.ModTime,
IsLatest: objInfo.IsLatest,
DeleteMarker: objInfo.DeleteMarker,
})
if !expiryTime.IsZero() {
w.Header()[xhttp.AmzExpiration] = []string{
fmt.Sprintf(`expiry-date="%s", rule-id="%s"`, expiryTime.Format(http.TimeFormat), ruleID),
if opts.VersionID == "" {
if ruleID, expiryTime := lc.PredictExpiryTime(lifecycle.ObjectOpts{
Name: objInfo.Name,
UserTags: objInfo.UserTags,
VersionID: objInfo.VersionID,
ModTime: objInfo.ModTime,
IsLatest: objInfo.IsLatest,
DeleteMarker: objInfo.DeleteMarker,
SuccessorModTime: objInfo.SuccessorModTime,
}); !expiryTime.IsZero() {
w.Header()[xhttp.AmzExpiration] = []string{
fmt.Sprintf(`expiry-date="%s", rule-id="%s"`, expiryTime.Format(http.TimeFormat), ruleID),
}
}
}
if objInfo.TransitionStatus == lifecycle.TransitionComplete {


@@ -36,8 +36,8 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
maxkeys = maxObjectList
}
prefix = values.Get("prefix")
marker = values.Get("marker")
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("marker"))
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
return
@@ -56,8 +56,8 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
maxkeys = maxObjectList
}
prefix = values.Get("prefix")
marker = values.Get("key-marker")
prefix = trimLeadingSlash(values.Get("prefix"))
marker = trimLeadingSlash(values.Get("key-marker"))
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker")
@@ -86,8 +86,8 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
maxkeys = maxObjectList
}
prefix = values.Get("prefix")
startAfter = values.Get("start-after")
prefix = trimLeadingSlash(values.Get("prefix"))
startAfter = trimLeadingSlash(values.Get("start-after"))
delimiter = values.Get("delimiter")
fetchOwner = values.Get("fetch-owner") == "true"
encodingType = values.Get("encoding-type")
@@ -117,8 +117,8 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
maxUploads = maxUploadsList
}
prefix = values.Get("prefix")
keyMarker = values.Get("key-marker")
prefix = trimLeadingSlash(values.Get("prefix"))
keyMarker = trimLeadingSlash(values.Get("key-marker"))
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type")


@@ -48,6 +48,12 @@ type LocationResponse struct {
Location string `xml:",chardata"`
}
// PolicyStatus captures information returned by GetBucketPolicyStatusHandler
type PolicyStatus struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ PolicyStatus" json:"-"`
IsPublic string
}
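The space-separated xml.Name tag above fixes both the S3 namespace and the element name when the response is marshaled. A quick sketch of what encoding/xml produces for such a struct (the type is copied here only for illustration):

package main

import (
	"encoding/xml"
	"fmt"
)

type policyStatus struct {
	XMLName  xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ PolicyStatus"`
	IsPublic string
}

func main() {
	out, err := xml.Marshal(policyStatus{IsPublic: "TRUE"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
	// <PolicyStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><IsPublic>TRUE</IsPublic></PolicyStatus>
}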
// ListVersionsResponse - format for list bucket versions response.
type ListVersionsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListVersionsResult" json:"-"`
@@ -388,7 +394,7 @@ func getObjectLocation(r *http.Request, domains []string, bucket, object string)
}
proto := handlers.GetSourceScheme(r)
if proto == "" {
proto = getURLScheme(globalIsSSL)
proto = getURLScheme(globalIsTLS)
}
u := &url.URL{
Host: r.Host,
@@ -410,9 +416,11 @@ func getObjectLocation(r *http.Request, domains []string, bucket, object string)
func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
listbuckets := make([]Bucket, 0, len(buckets))
var data = ListBucketsResponse{}
var owner = Owner{}
var owner = Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
owner.ID = globalMinioDefaultOwnerID
for _, bucket := range buckets {
var listbucket = Bucket{}
listbucket.Name = bucket.Name
@@ -429,10 +437,12 @@ func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
// generates an ListBucketVersions response for the said bucket with other enumerated options.
func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delimiter, encodingType string, maxKeys int, resp ListObjectVersionsInfo) ListVersionsResponse {
versions := make([]ObjectVersion, 0, len(resp.Objects))
var owner = Owner{}
var owner = Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
var data = ListVersionsResponse{}
owner.ID = globalMinioDefaultOwnerID
for _, object := range resp.Objects {
var content = ObjectVersion{}
if object.Name == "" {
@@ -485,10 +495,12 @@ func generateListVersionsResponse(bucket, prefix, marker, versionIDMarker, delim
// generates an ListObjectsV1 response for the said bucket with other enumerated options.
func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingType string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
contents := make([]Object, 0, len(resp.Objects))
var owner = Owner{}
var owner = Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
var data = ListObjectsResponse{}
owner.ID = globalMinioDefaultOwnerID
for _, object := range resp.Objects {
var content = Object{}
if object.Name == "" {
@@ -532,12 +544,11 @@ func generateListObjectsV1Response(bucket, prefix, marker, delimiter, encodingTy
// generates an ListObjectsV2 response for the said bucket with other enumerated options.
func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter, delimiter, encodingType string, fetchOwner, isTruncated bool, maxKeys int, objects []ObjectInfo, prefixes []string, metadata bool) ListObjectsV2Response {
contents := make([]Object, 0, len(objects))
var owner = Owner{}
var data = ListObjectsV2Response{}
if fetchOwner {
owner.ID = globalMinioDefaultOwnerID
var owner = Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: "minio",
}
var data = ListObjectsV2Response{}
for _, object := range objects {
var content = Object{}
@@ -565,7 +576,7 @@ func generateListObjectsV2Response(bucket, prefix, token, nextToken, startAfter,
continue
}
// https://github.com/google/security-research/security/advisories/GHSA-76wf-9vgp-pj7w
if strings.EqualFold(k, xhttp.AmzMetaUnencryptedContentLength) || strings.EqualFold(k, xhttp.AmzMetaUnencryptedContentMD5) {
if equals(k, xhttp.AmzMetaUnencryptedContentLength, xhttp.AmzMetaUnencryptedContentMD5) {
continue
}
content.UserMetadata[k] = v
@@ -639,8 +650,16 @@ func generateListPartsResponse(partsInfo ListPartsInfo, encodingType string) Lis
listPartsResponse.Key = s3EncodeName(partsInfo.Object, encodingType)
listPartsResponse.UploadID = partsInfo.UploadID
listPartsResponse.StorageClass = globalMinioDefaultStorageClass
listPartsResponse.Initiator.ID = globalMinioDefaultOwnerID
listPartsResponse.Owner.ID = globalMinioDefaultOwnerID
// Dummy values, not meaningful
listPartsResponse.Initiator = Initiator{
ID: globalMinioDefaultOwnerID,
DisplayName: globalMinioDefaultOwnerID,
}
listPartsResponse.Owner = Owner{
ID: globalMinioDefaultOwnerID,
DisplayName: globalMinioDefaultOwnerID,
}
listPartsResponse.MaxParts = partsInfo.MaxParts
listPartsResponse.PartNumberMarker = partsInfo.PartNumberMarker


@@ -27,23 +27,35 @@ import (
)
func newHTTPServerFn() *xhttp.Server {
globalObjLayerMutex.Lock()
defer globalObjLayerMutex.Unlock()
globalObjLayerMutex.RLock()
defer globalObjLayerMutex.RUnlock()
return globalHTTPServer
}
func newObjectLayerFn() ObjectLayer {
func setHTTPServer(h *xhttp.Server) {
globalObjLayerMutex.Lock()
defer globalObjLayerMutex.Unlock()
globalHTTPServer = h
globalObjLayerMutex.Unlock()
}
func newObjectLayerFn() ObjectLayer {
globalObjLayerMutex.RLock()
defer globalObjLayerMutex.RUnlock()
return globalObjectAPI
}
func newCachedObjectLayerFn() CacheObjectLayer {
globalObjLayerMutex.Lock()
defer globalObjLayerMutex.Unlock()
globalObjLayerMutex.RLock()
defer globalObjLayerMutex.RUnlock()
return globalCacheObjectAPI
}
func setCacheObjectLayer(c CacheObjectLayer) {
globalObjLayerMutex.Lock()
globalCacheObjectAPI = c
globalObjLayerMutex.Unlock()
}
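This hunk moves the read-only accessors from the exclusive Lock to the shared RLock and adds explicit setters, so concurrent readers no longer serialize against each other. The pattern in isolation, with generic names rather than MinIO's globals:

package main

import "sync"

var (
	mu      sync.RWMutex
	current interface{} // stands in for the guarded global
)

// getCurrent takes the shared lock: many readers may proceed at once.
func getCurrent() interface{} {
	mu.RLock()
	defer mu.RUnlock()
	return current
}

// setCurrent takes the exclusive lock: writes serialize with all readers.
func setCurrent(v interface{}) {
	mu.Lock()
	current = v
	mu.Unlock()
}

func main() {
	setCurrent("object-layer")
	_ = getCurrent()
}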
func setObjectLayer(o ObjectLayer) {
globalObjLayerMutex.Lock()
globalObjectAPI = o
@@ -108,224 +120,229 @@ func registerAPIRouter(router *mux.Router) {
// Object operations
// HeadObject
bucket.Methods(http.MethodHead).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("headobject", httpTraceAll(api.HeadObjectHandler))))
collectAPIStats("headobject", maxClients(httpTraceAll(api.HeadObjectHandler))))
// CopyObjectPart
bucket.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").
HandlerFunc(maxClients(collectAPIStats("copyobjectpart", httpTraceAll(api.CopyObjectPartHandler)))).
HandlerFunc(collectAPIStats("copyobjectpart", maxClients(httpTraceAll(api.CopyObjectPartHandler)))).
Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// PutObjectPart
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectpart", httpTraceHdrs(api.PutObjectPartHandler)))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
collectAPIStats("putobjectpart", maxClients(httpTraceHdrs(api.PutObjectPartHandler)))).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// ListObjectParts
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("listobjectparts", httpTraceAll(api.ListObjectPartsHandler)))).Queries("uploadId", "{uploadId:.*}")
collectAPIStats("listobjectparts", maxClients(httpTraceAll(api.ListObjectPartsHandler)))).Queries("uploadId", "{uploadId:.*}")
// CompleteMultipartUpload
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("completemutipartupload", httpTraceAll(api.CompleteMultipartUploadHandler)))).Queries("uploadId", "{uploadId:.*}")
collectAPIStats("completemutipartupload", maxClients(httpTraceAll(api.CompleteMultipartUploadHandler)))).Queries("uploadId", "{uploadId:.*}")
// NewMultipartUpload
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("newmultipartupload", httpTraceAll(api.NewMultipartUploadHandler)))).Queries("uploads", "")
collectAPIStats("newmultipartupload", maxClients(httpTraceAll(api.NewMultipartUploadHandler)))).Queries("uploads", "")
// AbortMultipartUpload
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("abortmultipartupload", httpTraceAll(api.AbortMultipartUploadHandler)))).Queries("uploadId", "{uploadId:.*}")
collectAPIStats("abortmultipartupload", maxClients(httpTraceAll(api.AbortMultipartUploadHandler)))).Queries("uploadId", "{uploadId:.*}")
// GetObjectACL - this is a dummy call.
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjectacl", httpTraceHdrs(api.GetObjectACLHandler)))).Queries("acl", "")
collectAPIStats("getobjectacl", maxClients(httpTraceHdrs(api.GetObjectACLHandler)))).Queries("acl", "")
// PutObjectACL - this is a dummy call.
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectacl", httpTraceHdrs(api.PutObjectACLHandler)))).Queries("acl", "")
collectAPIStats("putobjectacl", maxClients(httpTraceHdrs(api.PutObjectACLHandler)))).Queries("acl", "")
// GetObjectTagging
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjecttagging", httpTraceHdrs(api.GetObjectTaggingHandler)))).Queries("tagging", "")
collectAPIStats("getobjecttagging", maxClients(httpTraceHdrs(api.GetObjectTaggingHandler)))).Queries("tagging", "")
// PutObjectTagging
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjecttagging", httpTraceHdrs(api.PutObjectTaggingHandler)))).Queries("tagging", "")
collectAPIStats("putobjecttagging", maxClients(httpTraceHdrs(api.PutObjectTaggingHandler)))).Queries("tagging", "")
// DeleteObjectTagging
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("deleteobjecttagging", httpTraceHdrs(api.DeleteObjectTaggingHandler)))).Queries("tagging", "")
collectAPIStats("deleteobjecttagging", maxClients(httpTraceHdrs(api.DeleteObjectTaggingHandler)))).Queries("tagging", "")
// SelectObjectContent
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("selectobjectcontent", httpTraceHdrs(api.SelectObjectContentHandler)))).Queries("select", "").Queries("select-type", "2")
collectAPIStats("selectobjectcontent", maxClients(httpTraceHdrs(api.SelectObjectContentHandler)))).Queries("select", "").Queries("select-type", "2")
// GetObjectRetention
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjectretention", httpTraceAll(api.GetObjectRetentionHandler)))).Queries("retention", "")
collectAPIStats("getobjectretention", maxClients(httpTraceAll(api.GetObjectRetentionHandler)))).Queries("retention", "")
// GetObjectLegalHold
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobjectlegalhold", httpTraceAll(api.GetObjectLegalHoldHandler)))).Queries("legal-hold", "")
collectAPIStats("getobjectlegalhold", maxClients(httpTraceAll(api.GetObjectLegalHoldHandler)))).Queries("legal-hold", "")
// GetObject
bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("getobject", httpTraceHdrs(api.GetObjectHandler))))
collectAPIStats("getobject", maxClients(httpTraceHdrs(api.GetObjectHandler))))
// CopyObject
bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").
HandlerFunc(maxClients(collectAPIStats("copyobject", httpTraceAll(api.CopyObjectHandler))))
bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp(xhttp.AmzCopySource, ".*?(\\/|%2F).*?").HandlerFunc(
collectAPIStats("copyobject", maxClients(httpTraceAll(api.CopyObjectHandler))))
// PutObjectRetention
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectretention", httpTraceAll(api.PutObjectRetentionHandler)))).Queries("retention", "")
collectAPIStats("putobjectretention", maxClients(httpTraceAll(api.PutObjectRetentionHandler)))).Queries("retention", "")
// PutObjectLegalHold
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobjectlegalhold", httpTraceAll(api.PutObjectLegalHoldHandler)))).Queries("legal-hold", "")
collectAPIStats("putobjectlegalhold", maxClients(httpTraceAll(api.PutObjectLegalHoldHandler)))).Queries("legal-hold", "")
// PutObject
bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("putobject", httpTraceHdrs(api.PutObjectHandler))))
collectAPIStats("putobject", maxClients(httpTraceHdrs(api.PutObjectHandler))))
// DeleteObject
bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("deleteobject", httpTraceAll(api.DeleteObjectHandler))))
collectAPIStats("deleteobject", maxClients(httpTraceAll(api.DeleteObjectHandler))))
// PostRestoreObject
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
collectAPIStats("restoreobject", maxClients(httpTraceAll(api.PostRestoreObjectHandler)))).Queries("restore", "")
/// Bucket operations
// GetBucketLocation
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlocation", httpTraceAll(api.GetBucketLocationHandler)))).Queries("location", "")
collectAPIStats("getbucketlocation", maxClients(httpTraceAll(api.GetBucketLocationHandler)))).Queries("location", "")
// GetBucketPolicy
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketpolicy", httpTraceAll(api.GetBucketPolicyHandler)))).Queries("policy", "")
collectAPIStats("getbucketpolicy", maxClients(httpTraceAll(api.GetBucketPolicyHandler)))).Queries("policy", "")
// GetBucketLifecycle
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlifecycle", httpTraceAll(api.GetBucketLifecycleHandler)))).Queries("lifecycle", "")
collectAPIStats("getbucketlifecycle", maxClients(httpTraceAll(api.GetBucketLifecycleHandler)))).Queries("lifecycle", "")
// GetBucketEncryption
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketencryption", httpTraceAll(api.GetBucketEncryptionHandler)))).Queries("encryption", "")
collectAPIStats("getbucketencryption", maxClients(httpTraceAll(api.GetBucketEncryptionHandler)))).Queries("encryption", "")
// GetBucketObjectLockConfig
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketobjectlockconfiguration", httpTraceAll(api.GetBucketObjectLockConfigHandler)))).Queries("object-lock", "")
collectAPIStats("getbucketobjectlockconfiguration", maxClients(httpTraceAll(api.GetBucketObjectLockConfigHandler)))).Queries("object-lock", "")
// GetBucketReplicationConfig
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketreplicationconfiguration", httpTraceAll(api.GetBucketReplicationConfigHandler)))).Queries("replication", "")
collectAPIStats("getbucketreplicationconfiguration", maxClients(httpTraceAll(api.GetBucketReplicationConfigHandler)))).Queries("replication", "")
// GetBucketVersioning
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketversioning", httpTraceAll(api.GetBucketVersioningHandler)))).Queries("versioning", "")
collectAPIStats("getbucketversioning", maxClients(httpTraceAll(api.GetBucketVersioningHandler)))).Queries("versioning", "")
// GetBucketNotification
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketnotification", httpTraceAll(api.GetBucketNotificationHandler)))).Queries("notification", "")
collectAPIStats("getbucketnotification", maxClients(httpTraceAll(api.GetBucketNotificationHandler)))).Queries("notification", "")
// ListenNotification
bucket.Methods(http.MethodGet).HandlerFunc(collectAPIStats("listennotification", httpTraceAll(api.ListenNotificationHandler))).Queries("events", "{events:.*}")
bucket.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listennotification", maxClients(httpTraceAll(api.ListenNotificationHandler)))).Queries("events", "{events:.*}")
// Dummy Bucket Calls
// GetBucketACL -- this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketacl", httpTraceAll(api.GetBucketACLHandler)))).Queries("acl", "")
collectAPIStats("getbucketacl", maxClients(httpTraceAll(api.GetBucketACLHandler)))).Queries("acl", "")
// PutBucketACL -- this is a dummy call.
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketacl", httpTraceAll(api.PutBucketACLHandler)))).Queries("acl", "")
collectAPIStats("putbucketacl", maxClients(httpTraceAll(api.PutBucketACLHandler)))).Queries("acl", "")
// GetBucketCors - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketcors", httpTraceAll(api.GetBucketCorsHandler)))).Queries("cors", "")
collectAPIStats("getbucketcors", maxClients(httpTraceAll(api.GetBucketCorsHandler)))).Queries("cors", "")
// GetBucketWebsiteHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketwebsite", httpTraceAll(api.GetBucketWebsiteHandler)))).Queries("website", "")
collectAPIStats("getbucketwebsite", maxClients(httpTraceAll(api.GetBucketWebsiteHandler)))).Queries("website", "")
// GetBucketAccelerateHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketaccelerate", httpTraceAll(api.GetBucketAccelerateHandler)))).Queries("accelerate", "")
collectAPIStats("getbucketaccelerate", maxClients(httpTraceAll(api.GetBucketAccelerateHandler)))).Queries("accelerate", "")
// GetBucketRequestPaymentHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketrequestpayment", httpTraceAll(api.GetBucketRequestPaymentHandler)))).Queries("requestPayment", "")
collectAPIStats("getbucketrequestpayment", maxClients(httpTraceAll(api.GetBucketRequestPaymentHandler)))).Queries("requestPayment", "")
// GetBucketLoggingHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlogging", httpTraceAll(api.GetBucketLoggingHandler)))).Queries("logging", "")
collectAPIStats("getbucketlogging", maxClients(httpTraceAll(api.GetBucketLoggingHandler)))).Queries("logging", "")
// GetBucketLifecycleHandler - this is a dummy call.
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbucketlifecycle", httpTraceAll(api.GetBucketLifecycleHandler)))).Queries("lifecycle", "")
collectAPIStats("getbucketlifecycle", maxClients(httpTraceAll(api.GetBucketLifecycleHandler)))).Queries("lifecycle", "")
// GetBucketTaggingHandler
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("getbuckettagging", httpTraceAll(api.GetBucketTaggingHandler)))).Queries("tagging", "")
collectAPIStats("getbuckettagging", maxClients(httpTraceAll(api.GetBucketTaggingHandler)))).Queries("tagging", "")
//DeleteBucketWebsiteHandler
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketwebsite", httpTraceAll(api.DeleteBucketWebsiteHandler)))).Queries("website", "")
collectAPIStats("deletebucketwebsite", maxClients(httpTraceAll(api.DeleteBucketWebsiteHandler)))).Queries("website", "")
// DeleteBucketTaggingHandler
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebuckettagging", httpTraceAll(api.DeleteBucketTaggingHandler)))).Queries("tagging", "")
collectAPIStats("deletebuckettagging", maxClients(httpTraceAll(api.DeleteBucketTaggingHandler)))).Queries("tagging", "")
// ListMultipartUploads
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listmultipartuploads", httpTraceAll(api.ListMultipartUploadsHandler)))).Queries("uploads", "")
collectAPIStats("listmultipartuploads", maxClients(httpTraceAll(api.ListMultipartUploadsHandler)))).Queries("uploads", "")
// ListObjectsV2M
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectsv2M", httpTraceAll(api.ListObjectsV2MHandler)))).Queries("list-type", "2", "metadata", "true")
collectAPIStats("listobjectsv2M", maxClients(httpTraceAll(api.ListObjectsV2MHandler)))).Queries("list-type", "2", "metadata", "true")
// ListObjectsV2
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectsv2", httpTraceAll(api.ListObjectsV2Handler)))).Queries("list-type", "2")
collectAPIStats("listobjectsv2", maxClients(httpTraceAll(api.ListObjectsV2Handler)))).Queries("list-type", "2")
// ListObjectVersions
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectversions", httpTraceAll(api.ListObjectVersionsHandler)))).Queries("versions", "")
// ListObjectsV1 (Legacy)
collectAPIStats("listobjectversions", maxClients(httpTraceAll(api.ListObjectVersionsHandler)))).Queries("versions", "")
// GetBucketPolicyStatus
bucket.Methods(http.MethodGet).HandlerFunc(
maxClients(collectAPIStats("listobjectsv1", httpTraceAll(api.ListObjectsV1Handler))))
collectAPIStats("getpolicystatus", maxClients(httpTraceAll(api.GetBucketPolicyStatusHandler)))).Queries("policyStatus", "")
// PutBucketLifecycle
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketlifecycle", httpTraceAll(api.PutBucketLifecycleHandler)))).Queries("lifecycle", "")
collectAPIStats("putbucketlifecycle", maxClients(httpTraceAll(api.PutBucketLifecycleHandler)))).Queries("lifecycle", "")
// PutBucketReplicationConfig
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketreplicationconfiguration", httpTraceAll(api.PutBucketReplicationConfigHandler)))).Queries("replication", "")
collectAPIStats("putbucketreplicationconfiguration", maxClients(httpTraceAll(api.PutBucketReplicationConfigHandler)))).Queries("replication", "")
// GetObjectRetention
// PutBucketEncryption
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketencryption", httpTraceAll(api.PutBucketEncryptionHandler)))).Queries("encryption", "")
collectAPIStats("putbucketencryption", maxClients(httpTraceAll(api.PutBucketEncryptionHandler)))).Queries("encryption", "")
// PutBucketPolicy
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketpolicy", httpTraceAll(api.PutBucketPolicyHandler)))).Queries("policy", "")
collectAPIStats("putbucketpolicy", maxClients(httpTraceAll(api.PutBucketPolicyHandler)))).Queries("policy", "")
// PutBucketObjectLockConfig
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketobjectlockconfig", httpTraceAll(api.PutBucketObjectLockConfigHandler)))).Queries("object-lock", "")
collectAPIStats("putbucketobjectlockconfig", maxClients(httpTraceAll(api.PutBucketObjectLockConfigHandler)))).Queries("object-lock", "")
// PutBucketTaggingHandler
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbuckettagging", httpTraceAll(api.PutBucketTaggingHandler)))).Queries("tagging", "")
collectAPIStats("putbuckettagging", maxClients(httpTraceAll(api.PutBucketTaggingHandler)))).Queries("tagging", "")
// PutBucketVersioning
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketversioning", httpTraceAll(api.PutBucketVersioningHandler)))).Queries("versioning", "")
collectAPIStats("putbucketversioning", maxClients(httpTraceAll(api.PutBucketVersioningHandler)))).Queries("versioning", "")
// PutBucketNotification
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucketnotification", httpTraceAll(api.PutBucketNotificationHandler)))).Queries("notification", "")
collectAPIStats("putbucketnotification", maxClients(httpTraceAll(api.PutBucketNotificationHandler)))).Queries("notification", "")
// PutBucket
bucket.Methods(http.MethodPut).HandlerFunc(
maxClients(collectAPIStats("putbucket", httpTraceAll(api.PutBucketHandler))))
collectAPIStats("putbucket", maxClients(httpTraceAll(api.PutBucketHandler))))
// HeadBucket
bucket.Methods(http.MethodHead).HandlerFunc(
maxClients(collectAPIStats("headbucket", httpTraceAll(api.HeadBucketHandler))))
collectAPIStats("headbucket", maxClients(httpTraceAll(api.HeadBucketHandler))))
// PostPolicy
bucket.Methods(http.MethodPost).HeadersRegexp(xhttp.ContentType, "multipart/form-data*").HandlerFunc(
maxClients(collectAPIStats("postpolicybucket", httpTraceHdrs(api.PostPolicyBucketHandler))))
collectAPIStats("postpolicybucket", maxClients(httpTraceHdrs(api.PostPolicyBucketHandler))))
// DeleteMultipleObjects
bucket.Methods(http.MethodPost).HandlerFunc(
maxClients(collectAPIStats("deletemultipleobjects", httpTraceAll(api.DeleteMultipleObjectsHandler)))).Queries("delete", "")
collectAPIStats("deletemultipleobjects", maxClients(httpTraceAll(api.DeleteMultipleObjectsHandler)))).Queries("delete", "")
// DeleteBucketPolicy
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketpolicy", httpTraceAll(api.DeleteBucketPolicyHandler)))).Queries("policy", "")
collectAPIStats("deletebucketpolicy", maxClients(httpTraceAll(api.DeleteBucketPolicyHandler)))).Queries("policy", "")
// DeleteBucketReplication
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketreplicationconfiguration", httpTraceAll(api.DeleteBucketReplicationConfigHandler)))).Queries("replication", "")
collectAPIStats("deletebucketreplicationconfiguration", maxClients(httpTraceAll(api.DeleteBucketReplicationConfigHandler)))).Queries("replication", "")
// DeleteBucketLifecycle
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketlifecycle", httpTraceAll(api.DeleteBucketLifecycleHandler)))).Queries("lifecycle", "")
collectAPIStats("deletebucketlifecycle", maxClients(httpTraceAll(api.DeleteBucketLifecycleHandler)))).Queries("lifecycle", "")
// DeleteBucketEncryption
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucketencryption", httpTraceAll(api.DeleteBucketEncryptionHandler)))).Queries("encryption", "")
collectAPIStats("deletebucketencryption", maxClients(httpTraceAll(api.DeleteBucketEncryptionHandler)))).Queries("encryption", "")
// DeleteBucket
bucket.Methods(http.MethodDelete).HandlerFunc(
maxClients(collectAPIStats("deletebucket", httpTraceAll(api.DeleteBucketHandler))))
// PostRestoreObject
bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(
maxClients(collectAPIStats("restoreobject", httpTraceAll(api.PostRestoreObjectHandler)))).Queries("restore", "")
collectAPIStats("deletebucket", maxClients(httpTraceAll(api.DeleteBucketHandler))))
// ListObjectsV1 (Legacy)
bucket.Methods(http.MethodGet).HandlerFunc(
collectAPIStats("listobjectsv1", maxClients(httpTraceAll(api.ListObjectsV1Handler))))
}
/// Root operation
// ListenNotification
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).HandlerFunc(
collectAPIStats("listennotification", httpTraceAll(api.ListenNotificationHandler))).Queries("events", "{events:.*}")
collectAPIStats("listennotification", maxClients(httpTraceAll(api.ListenNotificationHandler)))).Queries("events", "{events:.*}")
// ListBuckets
apiRouter.Methods(http.MethodGet).Path(SlashSeparator).HandlerFunc(
maxClients(collectAPIStats("listbuckets", httpTraceAll(api.ListBucketsHandler))))
collectAPIStats("listbuckets", maxClients(httpTraceAll(api.ListBucketsHandler))))
// S3 browser with signature v4 adds '//' for ListBuckets request, so rather
// than failing with UnknownAPIRequest we simply handle it for now.
apiRouter.Methods(http.MethodGet).Path(SlashSeparator + SlashSeparator).HandlerFunc(
maxClients(collectAPIStats("listbuckets", httpTraceAll(api.ListBucketsHandler))))
collectAPIStats("listbuckets", maxClients(httpTraceAll(api.ListBucketsHandler))))
// If none of the routes match add default error handler routes
apiRouter.NotFoundHandler = collectAPIStats("notfound", httpTraceAll(errorResponseHandler))
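Across this hunk the wrapper order flips from maxClients(collectAPIStats(...)) to collectAPIStats(maxClients(...)): the stats collector now sits outermost, so a request is counted even when the concurrency limiter later rejects it. A hedged gorilla/mux sketch of that ordering, with withStats and withLimit as hypothetical stand-ins for MinIO's wrappers:

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

// withStats runs first (outermost): every request is recorded.
func withStats(name string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// record a hit for `name` here
		h(w, r)
	}
}

// withLimit runs second: it may shed the request, but the hit was already counted.
func withLimit(h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// acquire a concurrency slot or reply 503 here
		h(w, r)
	}
}

func main() {
	r := mux.NewRouter()
	r.Methods(http.MethodGet).Path("/{bucket}").
		HandlerFunc(withStats("listobjectsv2", withLimit(func(w http.ResponseWriter, r *http.Request) {}))).
		Queries("list-type", "2")
	http.ListenAndServe(":9000", r)
}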


@@ -36,6 +36,7 @@ import (
"github.com/minio/minio/pkg/auth"
objectlock "github.com/minio/minio/pkg/bucket/object/lock"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/etag"
"github.com/minio/minio/pkg/hash"
iampolicy "github.com/minio/minio/pkg/iam/policy"
)
@@ -151,15 +152,17 @@ func validateAdminSignature(ctx context.Context, r *http.Request, region string)
return cred, claims, owner, ErrNone
}
// checkAdminRequestAuthType checks whether the request is a valid signature V2 or V4 request.
// It does not accept presigned or JWT or anonymous requests.
func checkAdminRequestAuthType(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
// checkAdminRequestAuth checks for authentication and authorization for the incoming
// request. It only accepts V2 and V4 requests. Presigned, JWT and anonymous requests
// are automatically rejected.
func checkAdminRequestAuth(ctx context.Context, r *http.Request, action iampolicy.AdminAction, region string) (auth.Credentials, APIErrorCode) {
cred, claims, owner, s3Err := validateAdminSignature(ctx, r, region)
if s3Err != ErrNone {
return cred, s3Err
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
IsOwner: owner,
@@ -184,12 +187,12 @@ func getSessionToken(r *http.Request) (token string) {
// Fetch claims in the security token returned by the client, doesn't return
// errors - upon errors the returned claims map will be empty.
func mustGetClaimsFromToken(r *http.Request) map[string]interface{} {
claims, _ := getClaimsFromToken(r, getSessionToken(r))
claims, _ := getClaimsFromToken(getSessionToken(r))
return claims
}
// Fetch claims in the security token returned by the client.
func getClaimsFromToken(r *http.Request, token string) (map[string]interface{}, error) {
func getClaimsFromToken(token string) (map[string]interface{}, error) {
claims := xjwt.NewMapClaims()
if token == "" {
return claims.Map(), nil
@@ -236,7 +239,7 @@ func getClaimsFromToken(r *http.Request, token string) (map[string]interface{},
if err != nil {
// Base64 decoding fails, we should log to indicate
// something is malforming the request sent by client.
logger.LogIf(r.Context(), err, logger.Application)
logger.LogIf(GlobalContext, err, logger.Application)
return nil, errAuthentication
}
claims.MapClaims[iampolicy.SessionPolicyName] = string(spBytes)
@@ -257,7 +260,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
if subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
return nil, ErrInvalidToken
}
claims, err := getClaimsFromToken(r, token)
claims, err := getClaimsFromToken(token)
if err != nil {
return nil, toAPIErrorCode(r.Context(), err)
}
@@ -270,7 +273,7 @@ func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]in
// for authenticated requests validates IAM policies.
// returns APIErrorCode if any to be replied to the client.
func checkRequestAuthType(ctx context.Context, r *http.Request, action policy.Action, bucketName, objectName string) (s3Err APIErrorCode) {
_, _, s3Err = checkRequestAuthTypeToAccessKey(ctx, r, action, bucketName, objectName)
_, _, s3Err = checkRequestAuthTypeCredential(ctx, r, action, bucketName, objectName)
return s3Err
}
@@ -280,14 +283,13 @@ func checkRequestAuthType(ctx context.Context, r *http.Request, action policy.Ac
// for authenticated requests validates IAM policies.
// returns APIErrorCode if any to be replied to the client.
// Additionally returns the accessKey used in the request, and if this request is by an admin.
func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, action policy.Action, bucketName, objectName string) (accessKey string, owner bool, s3Err APIErrorCode) {
var cred auth.Credentials
func checkRequestAuthTypeCredential(ctx context.Context, r *http.Request, action policy.Action, bucketName, objectName string) (cred auth.Credentials, owner bool, s3Err APIErrorCode) {
switch getRequestAuthType(r) {
case authTypeUnknown, authTypeStreamingSigned:
return accessKey, owner, ErrSignatureVersionNotSupported
return cred, owner, ErrSignatureVersionNotSupported
case authTypePresignedV2, authTypeSignedV2:
if s3Err = isReqAuthenticatedV2(r); s3Err != ErrNone {
return accessKey, owner, s3Err
return cred, owner, s3Err
}
cred, owner, s3Err = getReqAccessKeyV2(r)
case authTypeSigned, authTypePresigned:
@@ -297,18 +299,18 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
region = ""
}
if s3Err = isReqAuthenticated(ctx, r, region, serviceS3); s3Err != ErrNone {
return accessKey, owner, s3Err
return cred, owner, s3Err
}
cred, owner, s3Err = getReqAccessKeyV4(r, region, serviceS3)
}
if s3Err != ErrNone {
return accessKey, owner, s3Err
return cred, owner, s3Err
}
var claims map[string]interface{}
claims, s3Err = checkClaimsFromToken(r, cred)
if s3Err != ErrNone {
return accessKey, owner, s3Err
return cred, owner, s3Err
}
// LocationConstraint is valid only for CreateBucketAction.
@@ -318,7 +320,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
payload, err := ioutil.ReadAll(io.LimitReader(r.Body, maxLocationConstraintSize))
if err != nil {
logger.LogIf(ctx, err, logger.Application)
return accessKey, owner, ErrMalformedXML
return cred, owner, ErrMalformedXML
}
// Populate payload to extract location constraint.
@@ -327,7 +329,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
var s3Error APIErrorCode
locationConstraint, s3Error = parseLocationConstraint(r)
if s3Error != ErrNone {
return accessKey, owner, s3Error
return cred, owner, s3Error
}
// Populate payload again to handle it in HTTP handler.
@@ -348,7 +350,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
ObjectName: objectName,
}) {
// Request is allowed return the appropriate access key.
return cred.AccessKey, owner, ErrNone
return cred, owner, ErrNone
}
if action == policy.ListBucketVersionsAction {
@@ -363,15 +365,16 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
ObjectName: objectName,
}) {
// Request is allowed return the appropriate access key.
return cred.AccessKey, owner, ErrNone
return cred, owner, ErrNone
}
}
return cred.AccessKey, owner, ErrAccessDenied
return cred, owner, ErrAccessDenied
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.Action(action),
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
@@ -380,7 +383,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
Claims: claims,
}) {
// Request is allowed return the appropriate access key.
return cred.AccessKey, owner, ErrNone
return cred, owner, ErrNone
}
if action == policy.ListBucketVersionsAction {
@@ -388,6 +391,7 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
// verify as a fallback.
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
@@ -396,11 +400,11 @@ func checkRequestAuthTypeToAccessKey(ctx context.Context, r *http.Request, actio
Claims: claims,
}) {
// Request is allowed return the appropriate access key.
return cred.AccessKey, owner, ErrNone
return cred, owner, ErrNone
}
}
return cred.AccessKey, owner, ErrAccessDenied
return cred, owner, ErrAccessDenied
}
// Verify if request has valid AWS Signature Version '2'.
@@ -429,19 +433,14 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
return errCode
}
var (
err error
contentMD5, contentSHA256 []byte
)
// Extract 'Content-Md5' if present.
contentMD5, err = checkValidMD5(r.Header)
clientETag, err := etag.FromContentMD5(r.Header)
if err != nil {
return ErrInvalidDigest
}
// Extract either 'X-Amz-Content-Sha256' header or 'X-Amz-Content-Sha256' query parameter (if V4 presigned)
// Do not verify 'X-Amz-Content-Sha256' if skipSHA256.
var contentSHA256 []byte
if skipSHA256 := skipContentSha256Cksum(r); !skipSHA256 && isRequestPresignedSignatureV4(r) {
if sha256Sum, ok := r.URL.Query()[xhttp.AmzContentSha256]; ok && len(sha256Sum) > 0 {
contentSHA256, err = hex.DecodeString(sha256Sum[0])
@@ -458,8 +457,7 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
// Verify 'Content-Md5' and/or 'X-Amz-Content-Sha256' if present.
// The verification happens implicit during reading.
reader, err := hash.NewReader(r.Body, -1, hex.EncodeToString(contentMD5),
hex.EncodeToString(contentSHA256), -1, globalCLIContext.StrictS3Compat)
reader, err := hash.NewReader(r.Body, -1, clientETag.String(), hex.EncodeToString(contentSHA256), -1)
if err != nil {
return toAPIErrorCode(ctx, err)
}
@@ -467,16 +465,6 @@ func isReqAuthenticated(ctx context.Context, r *http.Request, region string, sty
return ErrNone
}
// authHandler - handles all the incoming authorization headers and validates them if possible.
type authHandler struct {
handler http.Handler
}
// setAuthHandler to validate authorization header for the incoming request.
func setAuthHandler(h http.Handler) http.Handler {
return authHandler{h}
}
// List of all support S3 auth types.
var supportedS3AuthTypes = map[authType]struct{}{
authTypeAnonymous: {},
@@ -494,26 +482,30 @@ func isSupportedS3AuthType(aType authType) bool {
return ok
}
// handler for validating incoming authorization headers.
func (a authHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
aType := getRequestAuthType(r)
if isSupportedS3AuthType(aType) {
// Let top level caller validate for anonymous and known signed requests.
a.handler.ServeHTTP(w, r)
return
} else if aType == authTypeJWT {
// Validate Authorization header if it's valid for JWT request.
if _, _, authErr := webRequestAuthenticate(r); authErr != nil {
w.WriteHeader(http.StatusUnauthorized)
// setAuthHandler to validate authorization header for the incoming request.
func setAuthHandler(h http.Handler) http.Handler {
// handler for validating incoming authorization headers.
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
aType := getRequestAuthType(r)
if isSupportedS3AuthType(aType) {
// Let top level caller validate for anonymous and known signed requests.
h.ServeHTTP(w, r)
return
} else if aType == authTypeJWT {
// Validate Authorization header if it's valid for JWT request.
if _, _, authErr := webRequestAuthenticate(r); authErr != nil {
w.WriteHeader(http.StatusUnauthorized)
w.Write([]byte(authErr.Error()))
return
}
h.ServeHTTP(w, r)
return
} else if aType == authTypeSTS {
h.ServeHTTP(w, r)
return
}
a.handler.ServeHTTP(w, r)
return
} else if aType == authTypeSTS {
a.handler.ServeHTTP(w, r)
return
}
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL, guessIsBrowserReq(r))
writeErrorResponse(r.Context(), w, errorCodes.ToAPIErr(ErrSignatureVersionNotSupported), r.URL, guessIsBrowserReq(r))
})
}
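The rewrite above dissolves the authHandler struct into a closure returned by setAuthHandler; behavior is identical, but the middleware no longer needs a named type with a ServeHTTP method. The general shape of that refactor, assuming only net/http:

// Before: a named type implementing http.Handler.
// After: the same middleware as a closure over the wrapped handler.
func setMiddleware(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// pre-flight validation goes here; on failure, write an
		// error response and return without invoking h.
		h.ServeHTTP(w, r)
	})
}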
func validateSignature(atype authType, r *http.Request) (auth.Credentials, bool, map[string]interface{}, APIErrorCode) {
@@ -559,6 +551,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
if retMode == objectlock.RetGovernance && byPassSet {
byPassSet = globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.BypassGovernanceRetentionAction,
BucketName: bucketName,
ConditionValues: conditions,
@@ -568,6 +561,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
}
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.PutObjectRetentionAction,
BucketName: bucketName,
ConditionValues: conditions,
@@ -591,6 +585,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
if retMode == objectlock.RetGovernance && byPassSet {
byPassSet = globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.BypassGovernanceRetentionAction,
BucketName: bucketName,
ObjectName: objectName,
@@ -601,6 +596,7 @@ func isPutRetentionAllowed(bucketName, objectName string, retDays int, retDate t
}
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.PutObjectRetentionAction,
BucketName: bucketName,
ConditionValues: conditions,
@@ -656,6 +652,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
if cred.AccessKey == "" {
if globalPolicySys.IsAllowed(policy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: policy.Action(action),
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", "", nil),
@@ -669,6 +666,7 @@ func isPutActionAllowed(ctx context.Context, atype authType, bucketName, objectN
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: action,
BucketName: bucketName,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),


@@ -421,7 +421,7 @@ func TestCheckAdminRequestAuthType(t *testing.T) {
}
ctx := context.Background()
for i, testCase := range testCases {
if _, s3Error := checkAdminRequestAuthType(ctx, testCase.Request, iampolicy.AllAdminActions, globalServerRegion); s3Error != testCase.ErrCode {
if _, s3Error := checkAdminRequestAuth(ctx, testCase.Request, iampolicy.AllAdminActions, globalServerRegion); s3Error != testCase.ErrCode {
t.Errorf("Test %d: Unexpected s3error returned wanted %d, got %d", i, testCase.ErrCode, s3Error)
}
}


@@ -61,23 +61,29 @@ func waitForLowHTTPReq(maxIO int, maxWait time.Duration) {
}
// Wait at most 10 times at 100 millisecond intervals before proceeding
waitCount := 10
waitTick := 100 * time.Millisecond
// Bucket notification and http trace are not costly; it is okay to ignore them
// while counting the number of concurrent connections
maxIOFn := func() int {
return maxIO + globalHTTPListen.NumSubscribers() + globalHTTPTrace.NumSubscribers()
return maxIO + int(globalHTTPListen.NumSubscribers()) + int(globalHTTPTrace.NumSubscribers())
}
tmpMaxWait := maxWait
if httpServer := newHTTPServerFn(); httpServer != nil {
// Any requests in progress, delay the heal.
for httpServer.GetRequestCount() >= maxIOFn() {
time.Sleep(waitTick)
waitCount--
if waitCount == 0 {
if tmpMaxWait > 0 {
if tmpMaxWait < waitTick {
time.Sleep(tmpMaxWait)
} else {
time.Sleep(waitTick)
}
tmpMaxWait = tmpMaxWait - waitTick
}
if tmpMaxWait <= 0 {
if intDataUpdateTracker.debug {
logger.Info("waitForLowHTTPReq: waited %d times, resuming", waitCount)
logger.Info("waitForLowHTTPReq: waited max %s, resuming", maxWait)
}
break
}
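The loop above trades the fixed ten-retry counter for a running tmpMaxWait budget: every pass sleeps one tick (or the smaller remainder), debits the budget, and resumes once the budget is exhausted. The same logic isolated into a helper, with done as a hypothetical predicate and the time package assumed imported:

func waitUntil(done func() bool, maxWait, tick time.Duration) {
	remaining := maxWait
	for !done() {
		if remaining > 0 {
			if remaining < tick {
				time.Sleep(remaining) // final, shorter sleep
			} else {
				time.Sleep(tick)
			}
			remaining -= tick
		}
		if remaining <= 0 {
			return // waited the full budget, resume anyway
		}
	}
}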
@@ -91,20 +97,22 @@ func (h *healRoutine) run(ctx context.Context, objAPI ObjectLayer) {
select {
case task, ok := <-h.tasks:
if !ok {
break
return
}
var res madmin.HealResultItem
var err error
switch {
case task.bucket == nopHeal:
switch task.bucket {
case nopHeal:
continue
case task.bucket == SlashSeparator:
case SlashSeparator:
res, err = healDiskFormat(ctx, objAPI, task.opts)
case task.bucket != "" && task.object == "":
res, err = objAPI.HealBucket(ctx, task.bucket, task.opts.DryRun, task.opts.Remove)
case task.bucket != "" && task.object != "":
res, err = objAPI.HealObject(ctx, task.bucket, task.object, task.versionID, task.opts)
default:
if task.object == "" {
res, err = objAPI.HealBucket(ctx, task.bucket, task.opts)
} else {
res, err = objAPI.HealObject(ctx, task.bucket, task.object, task.versionID, task.opts)
}
}
task.responseCh <- healResult{result: res, err: err}


@@ -17,13 +17,23 @@
package cmd
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"sort"
"strings"
"sync"
"time"
"github.com/dustin/go-humanize"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/color"
"github.com/minio/minio/pkg/console"
"github.com/minio/minio/pkg/madmin"
)
const (
@@ -32,10 +42,200 @@ const (
)
//go:generate msgp -file $GOFILE -unexported
type healingTracker struct {
ID string
// future add more tracking capabilities
// healingTracker is used to persist healing information during a heal.
type healingTracker struct {
disk StorageAPI `msg:"-"`
ID string
PoolIndex int
SetIndex int
DiskIndex int
Path string
Endpoint string
Started time.Time
LastUpdate time.Time
ObjectsHealed uint64
ObjectsFailed uint64
BytesDone uint64
BytesFailed uint64
// Last object scanned.
Bucket string `json:"-"`
Object string `json:"-"`
// Numbers when current bucket started healing,
// for resuming with correct numbers.
ResumeObjectsHealed uint64 `json:"-"`
ResumeObjectsFailed uint64 `json:"-"`
ResumeBytesDone uint64 `json:"-"`
ResumeBytesFailed uint64 `json:"-"`
// Filled on startup/restarts.
QueuedBuckets []string
// Filled during heal.
HealedBuckets []string
// Add future tracking capabilities
// Be sure that they are included in toHealingDisk
}
// loadHealingTracker will load the healing tracker from the supplied disk.
// The disk ID will be validated against the loaded one.
func loadHealingTracker(ctx context.Context, disk StorageAPI) (*healingTracker, error) {
if disk == nil {
return nil, errors.New("loadHealingTracker: nil disk given")
}
diskID, err := disk.GetDiskID()
if err != nil {
return nil, err
}
b, err := disk.ReadAll(ctx, minioMetaBucket,
pathJoin(bucketMetaPrefix, slashSeparator, healingTrackerFilename))
if err != nil {
return nil, err
}
var h healingTracker
_, err = h.UnmarshalMsg(b)
if err != nil {
return nil, err
}
if h.ID != diskID && h.ID != "" {
return nil, fmt.Errorf("loadHealingTracker: disk id mismatch expected %s, got %s", h.ID, diskID)
}
h.disk = disk
h.ID = diskID
return &h, nil
}
// newHealingTracker will create a new healing tracker for the disk.
func newHealingTracker(disk StorageAPI) *healingTracker {
diskID, _ := disk.GetDiskID()
h := healingTracker{
disk: disk,
ID: diskID,
Path: disk.String(),
Endpoint: disk.Endpoint().String(),
Started: time.Now().UTC(),
}
h.PoolIndex, h.SetIndex, h.DiskIndex = disk.GetDiskLoc()
return &h
}
// update will update the tracker on the disk.
// If the tracker has been deleted an error is returned.
func (h *healingTracker) update(ctx context.Context) error {
if h.disk.Healing() == nil {
return fmt.Errorf("healingTracker: disk %q is not marked as healing", h.ID)
}
if h.ID == "" || h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
h.ID, _ = h.disk.GetDiskID()
h.PoolIndex, h.SetIndex, h.DiskIndex = h.disk.GetDiskLoc()
}
return h.save(ctx)
}
// save will unconditionally save the tracker, creating it if it does not exist.
func (h *healingTracker) save(ctx context.Context) error {
if h.PoolIndex < 0 || h.SetIndex < 0 || h.DiskIndex < 0 {
// Attempt to get location.
if api := newObjectLayerFn(); api != nil {
if ep, ok := api.(*erasureServerPools); ok {
h.PoolIndex, h.SetIndex, h.DiskIndex, _ = ep.getPoolAndSet(h.ID)
}
}
}
h.LastUpdate = time.Now().UTC()
htrackerBytes, err := h.MarshalMsg(nil)
if err != nil {
return err
}
globalBackgroundHealState.updateHealStatus(h)
return h.disk.WriteAll(ctx, minioMetaBucket,
pathJoin(bucketMetaPrefix, slashSeparator, healingTrackerFilename),
htrackerBytes)
}
// delete the tracker on disk.
func (h *healingTracker) delete(ctx context.Context) error {
return h.disk.Delete(ctx, minioMetaBucket,
pathJoin(bucketMetaPrefix, slashSeparator, healingTrackerFilename),
false)
}
func (h *healingTracker) isHealed(bucket string) bool {
for _, v := range h.HealedBuckets {
if v == bucket {
return true
}
}
return false
}
// resume will reset progress to the numbers at the start of the bucket.
func (h *healingTracker) resume() {
h.ObjectsHealed = h.ResumeObjectsHealed
h.ObjectsFailed = h.ResumeObjectsFailed
h.BytesDone = h.ResumeBytesDone
h.BytesFailed = h.ResumeBytesFailed
}
// bucketDone should be called when a bucket is done healing.
// Adds the bucket to the list of healed buckets and updates resume numbers.
func (h *healingTracker) bucketDone(bucket string) {
h.ResumeObjectsHealed = h.ObjectsHealed
h.ResumeObjectsFailed = h.ObjectsFailed
h.ResumeBytesDone = h.BytesDone
h.ResumeBytesFailed = h.BytesFailed
h.HealedBuckets = append(h.HealedBuckets, bucket)
for i, b := range h.QueuedBuckets {
if b == bucket {
// Delete...
h.QueuedBuckets = append(h.QueuedBuckets[:i], h.QueuedBuckets[i+1:]...)
}
}
}
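bucketDone snapshots the running counters into the Resume* fields and moves the bucket from the queued to the healed list; resume later restores those snapshots so a restarted heal does not double-count partial progress. A trimmed-down sketch of that bookkeeping (one counter kept, hypothetical type):

```
package main

import "fmt"

// tracker reproduces just the resume bookkeeping of healingTracker.
type tracker struct {
	ObjectsHealed       uint64
	ResumeObjectsHealed uint64
	QueuedBuckets       []string
	HealedBuckets       []string
}

// bucketDone snapshots progress and dequeues the finished bucket.
func (t *tracker) bucketDone(bucket string) {
	t.ResumeObjectsHealed = t.ObjectsHealed
	t.HealedBuckets = append(t.HealedBuckets, bucket)
	for i, b := range t.QueuedBuckets {
		if b == bucket {
			t.QueuedBuckets = append(t.QueuedBuckets[:i], t.QueuedBuckets[i+1:]...)
			break
		}
	}
}

// resume rolls the live counter back to the last snapshot.
func (t *tracker) resume() { t.ObjectsHealed = t.ResumeObjectsHealed }

func main() {
	t := tracker{QueuedBuckets: []string{"a", "b"}}
	t.ObjectsHealed = 42
	t.bucketDone("a")
	t.ObjectsHealed = 99 // partial progress on "b" that will be retried
	t.resume()
	fmt.Println(t.QueuedBuckets, t.HealedBuckets, t.ObjectsHealed) // [b] [a] 42
}
```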
// setQueuedBuckets will add buckets, but exclude any that are already in h.HealedBuckets.
// Order is preserved.
func (h *healingTracker) setQueuedBuckets(buckets []BucketInfo) {
s := set.CreateStringSet(h.HealedBuckets...)
h.QueuedBuckets = make([]string, 0, len(buckets))
for _, b := range buckets {
if !s.Contains(b.Name) {
h.QueuedBuckets = append(h.QueuedBuckets, b.Name)
}
}
}
func (h *healingTracker) printTo(writer io.Writer) {
b, err := json.MarshalIndent(h, "", " ")
if err != nil {
writer.Write([]byte(err.Error()))
}
writer.Write(b)
}
// toHealingDisk converts the information to madmin.HealingDisk
func (h *healingTracker) toHealingDisk() madmin.HealingDisk {
return madmin.HealingDisk{
ID: h.ID,
Endpoint: h.Endpoint,
PoolIndex: h.PoolIndex,
SetIndex: h.SetIndex,
DiskIndex: h.DiskIndex,
Path: h.Path,
Started: h.Started.UTC(),
LastUpdate: h.LastUpdate.UTC(),
ObjectsHealed: h.ObjectsHealed,
ObjectsFailed: h.ObjectsFailed,
BytesDone: h.BytesDone,
BytesFailed: h.BytesFailed,
Bucket: h.Bucket,
Object: h.Object,
QueuedBuckets: h.QueuedBuckets,
HealedBuckets: h.HealedBuckets,
}
}
func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
@@ -46,16 +246,7 @@ func initAutoHeal(ctx context.Context, objAPI ObjectLayer) {
initBackgroundHealing(ctx, objAPI) // start quick background healing
var bgSeq *healSequence
var found bool
for {
bgSeq, found = globalBackgroundHealState.getHealSequenceByToken(bgHealingUUID)
if found {
break
}
time.Sleep(time.Second)
}
bgSeq := mustGetHealSequence(ctx)
globalBackgroundHealState.pushHealLocalDisks(getLocalDisksToHeal()...)
@@ -96,7 +287,7 @@ func getLocalDisksToHeal() (disksToHeal Endpoints) {
disk, _, err := connectEndpoint(endpoint)
if errors.Is(err, errUnformattedDisk) {
disksToHeal = append(disksToHeal, endpoint)
} else if err == nil && disk != nil && disk.Healing() {
} else if err == nil && disk != nil && disk.Healing() != nil {
disksToHeal = append(disksToHeal, disk.Endpoint())
}
}
@@ -118,15 +309,20 @@ func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
// 2. Only the node hosting the disk is responsible to perform the heal
func monitorLocalDisksAndHeal(ctx context.Context, z *erasureServerPools, bgSeq *healSequence) {
// Perform automatic disk healing when a disk is replaced locally.
wait:
diskCheckTimer := time.NewTimer(defaultMonitorNewDiskInterval)
defer diskCheckTimer.Stop()
for {
select {
case <-ctx.Done():
return
case <-time.After(defaultMonitorNewDiskInterval):
var erasureSetInZoneDisksToHeal []map[int][]StorageAPI
case <-diskCheckTimer.C:
// Reset to next interval.
diskCheckTimer.Reset(defaultMonitorNewDiskInterval)
healDisks := globalBackgroundHealState.getHealLocalDisks()
var erasureSetInPoolDisksToHeal []map[int][]StorageAPI
healDisks := globalBackgroundHealState.getHealLocalDiskEndpoints()
if len(healDisks) > 0 {
// Reformat disks
bgSeq.sourceCh <- healSource{bucket: SlashSeparator}
@@ -137,12 +333,16 @@ wait:
logger.Info(fmt.Sprintf("Found drives to heal %d, proceeding to heal content...",
len(healDisks)))
erasureSetInZoneDisksToHeal = make([]map[int][]StorageAPI, len(z.serverPools))
erasureSetInPoolDisksToHeal = make([]map[int][]StorageAPI, len(z.serverPools))
for i := range z.serverPools {
erasureSetInZoneDisksToHeal[i] = map[int][]StorageAPI{}
erasureSetInPoolDisksToHeal[i] = map[int][]StorageAPI{}
}
}
if serverDebugLog {
console.Debugf(color.Green("healDisk:")+" disk check timer fired, attempting to heal %d drives\n", len(healDisks))
}
// heal only if new disks found.
for _, endpoint := range healDisks {
disk, format, err := connectEndpoint(endpoint)
@@ -151,69 +351,96 @@ wait:
continue
}
zoneIdx := globalEndpoints.GetLocalZoneIdx(disk.Endpoint())
if zoneIdx < 0 {
poolIdx := globalEndpoints.GetLocalPoolIdx(disk.Endpoint())
if poolIdx < 0 {
continue
}
// Calculate the set index where the current endpoint belongs
z.serverPools[zoneIdx].erasureDisksMu.RLock()
z.serverPools[poolIdx].erasureDisksMu.RLock()
// Protect reading reference format.
setIndex, _, err := findDiskIndex(z.serverPools[zoneIdx].format, format)
z.serverPools[zoneIdx].erasureDisksMu.RUnlock()
setIndex, _, err := findDiskIndex(z.serverPools[poolIdx].format, format)
z.serverPools[poolIdx].erasureDisksMu.RUnlock()
if err != nil {
printEndpointError(endpoint, err, false)
continue
}
erasureSetInZoneDisksToHeal[zoneIdx][setIndex] = append(erasureSetInZoneDisksToHeal[zoneIdx][setIndex], disk)
erasureSetInPoolDisksToHeal[poolIdx][setIndex] = append(erasureSetInPoolDisksToHeal[poolIdx][setIndex], disk)
}
buckets, _ := z.ListBucketsHeal(ctx)
for i, setMap := range erasureSetInZoneDisksToHeal {
for setIndex, disks := range setMap {
for _, disk := range disks {
logger.Info("Healing disk '%s' on %s zone", disk, humanize.Ordinal(i+1))
buckets, _ := z.ListBuckets(ctx)
// So someone changed the drives underneath, healing tracker missing.
if !disk.Healing() {
logger.Info("Healing tracker missing on '%s', disk was swapped again on %s zone", disk, humanize.Ordinal(i+1))
diskID, err := disk.GetDiskID()
buckets = append(buckets, BucketInfo{
Name: pathJoin(minioMetaBucket, minioConfigPrefix),
})
// Bucket data is dispersed across multiple zones/sets, so make
// sure to heal all bucket metadata configuration.
buckets = append(buckets, []BucketInfo{
{Name: pathJoin(minioMetaBucket, bucketMetaPrefix)},
}...)
// Heal latest buckets first.
sort.Slice(buckets, func(i, j int) bool {
a, b := strings.HasPrefix(buckets[i].Name, minioMetaBucket), strings.HasPrefix(buckets[j].Name, minioMetaBucket)
if a != b {
return a
}
return buckets[i].Created.After(buckets[j].Created)
})
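The sort above orders buckets by two keys: system buckets (those under minioMetaBucket) come first, and within each group newer buckets are healed before older ones. The same two-key comparator in isolation (hypothetical data; ".minio.sys" stands in for minioMetaBucket):

```
package main

import (
	"fmt"
	"sort"
	"strings"
)

type bucketInfo struct {
	Name    string
	Created int64 // stand-in for time.Time; larger means newer
}

func main() {
	buckets := []bucketInfo{
		{"photos", 10}, {".minio.sys/config", 1}, {"logs", 20},
	}
	sort.Slice(buckets, func(i, j int) bool {
		a := strings.HasPrefix(buckets[i].Name, ".minio.sys")
		b := strings.HasPrefix(buckets[j].Name, ".minio.sys")
		if a != b {
			return a // system buckets sort first
		}
		return buckets[i].Created > buckets[j].Created // newest first
	})
	fmt.Println(buckets) // [{.minio.sys/config 1} {logs 20} {photos 10}]
}
```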
// TODO(klauspost): This will block until all heals are done,
// in the future this should be able to start healing other sets at once.
var wg sync.WaitGroup
for i, setMap := range erasureSetInPoolDisksToHeal {
i := i
for setIndex, disks := range setMap {
if len(disks) == 0 {
continue
}
wg.Add(1)
go func(setIndex int, disks []StorageAPI) {
defer wg.Done()
for _, disk := range disks {
logger.Info("Healing disk '%v' on %s pool", disk, humanize.Ordinal(i+1))
// If the healing tracker is missing, the drives were changed underneath us.
tracker, err := loadHealingTracker(ctx, disk)
if err != nil {
logger.LogIf(ctx, err)
// reading format.json failed or not found, proceed to look
// for new disks to be healed again, we cannot proceed further.
goto wait
logger.Info("Healing tracker missing on '%s', disk was swapped again on %s pool", disk, humanize.Ordinal(i+1))
tracker = newHealingTracker(disk)
}
if err := saveHealingTracker(disk, diskID); err != nil {
tracker.PoolIndex, tracker.SetIndex, tracker.DiskIndex = disk.GetDiskLoc()
tracker.setQueuedBuckets(buckets)
if err := tracker.save(ctx); err != nil {
logger.LogIf(ctx, err)
// Unable to write healing tracker, permission denied or some
// other unexpected error occurred. Proceed to look for new
// disks to be healed again, we cannot proceed further.
goto wait
return
}
err = z.serverPools[i].sets[setIndex].healErasureSet(ctx, buckets, tracker)
if err != nil {
logger.LogIf(ctx, err)
continue
}
logger.Info("Healing disk '%s' on %s pool complete", disk, humanize.Ordinal(i+1))
var buf bytes.Buffer
tracker.printTo(&buf)
logger.Info("Summary:\n%s", buf.String())
logger.LogIf(ctx, tracker.delete(ctx))
// Only upon success pop the healed disk.
globalBackgroundHealState.popHealLocalDisks(disk.Endpoint())
}
lbDisks := z.serverPools[i].sets[setIndex].getOnlineDisks()
if err := healErasureSet(ctx, setIndex, buckets, lbDisks); err != nil {
logger.LogIf(ctx, err)
continue
}
logger.Info("Healing disk '%s' on %s zone complete", disk, humanize.Ordinal(i+1))
if err := disk.Delete(ctx, pathJoin(minioMetaBucket, bucketMetaPrefix),
healingTrackerFilename, false); err != nil && !errors.Is(err, errFileNotFound) {
logger.LogIf(ctx, err)
continue
}
// Only upon success pop the healed disk.
globalBackgroundHealState.popHealLocalDisks(disk.Endpoint())
}
}(setIndex, disks)
}
}
wg.Wait()
}
}
}
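The restructured loop fans healing out with one goroutine per erasure set and blocks on a WaitGroup, instead of healing sets strictly one after another. A minimal sketch of that fan-out shape (hypothetical healSet function):

```
package main

import (
	"fmt"
	"sync"
)

// healSet is a stand-in for healing all queued disks of one set.
func healSet(pool, set int, disks []string) {
	for _, d := range disks {
		fmt.Printf("pool %d set %d: healing %s\n", pool, set, d)
	}
}

func main() {
	pools := []map[int][]string{
		{0: {"disk-1"}, 1: {"disk-2", "disk-3"}},
	}
	var wg sync.WaitGroup
	for i, setMap := range pools {
		i := i // capture a per-iteration copy, as the real code does
		for setIndex, disks := range setMap {
			if len(disks) == 0 {
				continue
			}
			wg.Add(1)
			go func(setIndex int, disks []string) {
				defer wg.Done()
				healSet(i, setIndex, disks)
			}(setIndex, disks)
		}
	}
	wg.Wait()
}
```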

View File

@@ -30,6 +30,146 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "ID")
return
}
case "PoolIndex":
z.PoolIndex, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "PoolIndex")
return
}
case "SetIndex":
z.SetIndex, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "SetIndex")
return
}
case "DiskIndex":
z.DiskIndex, err = dc.ReadInt()
if err != nil {
err = msgp.WrapError(err, "DiskIndex")
return
}
case "Path":
z.Path, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
case "Endpoint":
z.Endpoint, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Started":
z.Started, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "Started")
return
}
case "LastUpdate":
z.LastUpdate, err = dc.ReadTime()
if err != nil {
err = msgp.WrapError(err, "LastUpdate")
return
}
case "ObjectsHealed":
z.ObjectsHealed, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ObjectsHealed")
return
}
case "ObjectsFailed":
z.ObjectsFailed, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ObjectsFailed")
return
}
case "BytesDone":
z.BytesDone, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "BytesDone")
return
}
case "BytesFailed":
z.BytesFailed, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "BytesFailed")
return
}
case "Bucket":
z.Bucket, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Bucket")
return
}
case "Object":
z.Object, err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "Object")
return
}
case "ResumeObjectsHealed":
z.ResumeObjectsHealed, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeObjectsHealed")
return
}
case "ResumeObjectsFailed":
z.ResumeObjectsFailed, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeObjectsFailed")
return
}
case "ResumeBytesDone":
z.ResumeBytesDone, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeBytesDone")
return
}
case "ResumeBytesFailed":
z.ResumeBytesFailed, err = dc.ReadUint64()
if err != nil {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
case "QueuedBuckets":
var zb0002 uint32
zb0002, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err, "QueuedBuckets")
return
}
if cap(z.QueuedBuckets) >= int(zb0002) {
z.QueuedBuckets = (z.QueuedBuckets)[:zb0002]
} else {
z.QueuedBuckets = make([]string, zb0002)
}
for za0001 := range z.QueuedBuckets {
z.QueuedBuckets[za0001], err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "QueuedBuckets", za0001)
return
}
}
case "HealedBuckets":
var zb0003 uint32
zb0003, err = dc.ReadArrayHeader()
if err != nil {
err = msgp.WrapError(err, "HealedBuckets")
return
}
if cap(z.HealedBuckets) >= int(zb0003) {
z.HealedBuckets = (z.HealedBuckets)[:zb0003]
} else {
z.HealedBuckets = make([]string, zb0003)
}
for za0002 := range z.HealedBuckets {
z.HealedBuckets[za0002], err = dc.ReadString()
if err != nil {
err = msgp.WrapError(err, "HealedBuckets", za0002)
return
}
}
default:
err = dc.Skip()
if err != nil {
@@ -42,10 +182,10 @@ func (z *healingTracker) DecodeMsg(dc *msgp.Reader) (err error) {
}
// EncodeMsg implements msgp.Encodable
func (z healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 1
func (z *healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
// map header, size 20
// write "ID"
err = en.Append(0x81, 0xa2, 0x49, 0x44)
err = en.Append(0xde, 0x0, 0x14, 0xa2, 0x49, 0x44)
if err != nil {
return
}
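The one-byte header 0x81 is MessagePack's fixmap marker for a single-entry map; growing the tracker to 20 fields forces the three-byte map16 form, 0xde followed by a big-endian uint16 count (0x0014 = 20). Decoding those markers by hand:

```
package main

import "fmt"

func main() {
	// fixmap: the low nibble of 0x8n is the entry count.
	fmt.Println(0x81 & 0x0f) // 1 (the old single "ID" field)
	// map16: 0xde is followed by a big-endian uint16 count.
	fmt.Println(uint16(0x00)<<8 | uint16(0x14)) // 20 tracker fields
}
```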
@@ -54,16 +194,283 @@ func (z healingTracker) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "ID")
return
}
// write "PoolIndex"
err = en.Append(0xa9, 0x50, 0x6f, 0x6f, 0x6c, 0x49, 0x6e, 0x64, 0x65, 0x78)
if err != nil {
return
}
err = en.WriteInt(z.PoolIndex)
if err != nil {
err = msgp.WrapError(err, "PoolIndex")
return
}
// write "SetIndex"
err = en.Append(0xa8, 0x53, 0x65, 0x74, 0x49, 0x6e, 0x64, 0x65, 0x78)
if err != nil {
return
}
err = en.WriteInt(z.SetIndex)
if err != nil {
err = msgp.WrapError(err, "SetIndex")
return
}
// write "DiskIndex"
err = en.Append(0xa9, 0x44, 0x69, 0x73, 0x6b, 0x49, 0x6e, 0x64, 0x65, 0x78)
if err != nil {
return
}
err = en.WriteInt(z.DiskIndex)
if err != nil {
err = msgp.WrapError(err, "DiskIndex")
return
}
// write "Path"
err = en.Append(0xa4, 0x50, 0x61, 0x74, 0x68)
if err != nil {
return
}
err = en.WriteString(z.Path)
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
// write "Endpoint"
err = en.Append(0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
if err != nil {
return
}
err = en.WriteString(z.Endpoint)
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
// write "Started"
err = en.Append(0xa7, 0x53, 0x74, 0x61, 0x72, 0x74, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteTime(z.Started)
if err != nil {
err = msgp.WrapError(err, "Started")
return
}
// write "LastUpdate"
err = en.Append(0xaa, 0x4c, 0x61, 0x73, 0x74, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65)
if err != nil {
return
}
err = en.WriteTime(z.LastUpdate)
if err != nil {
err = msgp.WrapError(err, "LastUpdate")
return
}
// write "ObjectsHealed"
err = en.Append(0xad, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x48, 0x65, 0x61, 0x6c, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ObjectsHealed)
if err != nil {
err = msgp.WrapError(err, "ObjectsHealed")
return
}
// write "ObjectsFailed"
err = en.Append(0xad, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ObjectsFailed)
if err != nil {
err = msgp.WrapError(err, "ObjectsFailed")
return
}
// write "BytesDone"
err = en.Append(0xa9, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
if err != nil {
return
}
err = en.WriteUint64(z.BytesDone)
if err != nil {
err = msgp.WrapError(err, "BytesDone")
return
}
// write "BytesFailed"
err = en.Append(0xab, 0x42, 0x79, 0x74, 0x65, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.BytesFailed)
if err != nil {
err = msgp.WrapError(err, "BytesFailed")
return
}
// write "Bucket"
err = en.Append(0xa6, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74)
if err != nil {
return
}
err = en.WriteString(z.Bucket)
if err != nil {
err = msgp.WrapError(err, "Bucket")
return
}
// write "Object"
err = en.Append(0xa6, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74)
if err != nil {
return
}
err = en.WriteString(z.Object)
if err != nil {
err = msgp.WrapError(err, "Object")
return
}
// write "ResumeObjectsHealed"
err = en.Append(0xb3, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x48, 0x65, 0x61, 0x6c, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeObjectsHealed)
if err != nil {
err = msgp.WrapError(err, "ResumeObjectsHealed")
return
}
// write "ResumeObjectsFailed"
err = en.Append(0xb3, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeObjectsFailed)
if err != nil {
err = msgp.WrapError(err, "ResumeObjectsFailed")
return
}
// write "ResumeBytesDone"
err = en.Append(0xaf, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeBytesDone)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesDone")
return
}
// write "ResumeBytesFailed"
err = en.Append(0xb1, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
if err != nil {
return
}
err = en.WriteUint64(z.ResumeBytesFailed)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
// write "QueuedBuckets"
err = en.Append(0xad, 0x51, 0x75, 0x65, 0x75, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteArrayHeader(uint32(len(z.QueuedBuckets)))
if err != nil {
err = msgp.WrapError(err, "QueuedBuckets")
return
}
for za0001 := range z.QueuedBuckets {
err = en.WriteString(z.QueuedBuckets[za0001])
if err != nil {
err = msgp.WrapError(err, "QueuedBuckets", za0001)
return
}
}
// write "HealedBuckets"
err = en.Append(0xad, 0x48, 0x65, 0x61, 0x6c, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
if err != nil {
return
}
err = en.WriteArrayHeader(uint32(len(z.HealedBuckets)))
if err != nil {
err = msgp.WrapError(err, "HealedBuckets")
return
}
for za0002 := range z.HealedBuckets {
err = en.WriteString(z.HealedBuckets[za0002])
if err != nil {
err = msgp.WrapError(err, "HealedBuckets", za0002)
return
}
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
func (z *healingTracker) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
// map header, size 1
// map header, size 20
// string "ID"
o = append(o, 0x81, 0xa2, 0x49, 0x44)
o = append(o, 0xde, 0x0, 0x14, 0xa2, 0x49, 0x44)
o = msgp.AppendString(o, z.ID)
// string "PoolIndex"
o = append(o, 0xa9, 0x50, 0x6f, 0x6f, 0x6c, 0x49, 0x6e, 0x64, 0x65, 0x78)
o = msgp.AppendInt(o, z.PoolIndex)
// string "SetIndex"
o = append(o, 0xa8, 0x53, 0x65, 0x74, 0x49, 0x6e, 0x64, 0x65, 0x78)
o = msgp.AppendInt(o, z.SetIndex)
// string "DiskIndex"
o = append(o, 0xa9, 0x44, 0x69, 0x73, 0x6b, 0x49, 0x6e, 0x64, 0x65, 0x78)
o = msgp.AppendInt(o, z.DiskIndex)
// string "Path"
o = append(o, 0xa4, 0x50, 0x61, 0x74, 0x68)
o = msgp.AppendString(o, z.Path)
// string "Endpoint"
o = append(o, 0xa8, 0x45, 0x6e, 0x64, 0x70, 0x6f, 0x69, 0x6e, 0x74)
o = msgp.AppendString(o, z.Endpoint)
// string "Started"
o = append(o, 0xa7, 0x53, 0x74, 0x61, 0x72, 0x74, 0x65, 0x64)
o = msgp.AppendTime(o, z.Started)
// string "LastUpdate"
o = append(o, 0xaa, 0x4c, 0x61, 0x73, 0x74, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65)
o = msgp.AppendTime(o, z.LastUpdate)
// string "ObjectsHealed"
o = append(o, 0xad, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x48, 0x65, 0x61, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ObjectsHealed)
// string "ObjectsFailed"
o = append(o, 0xad, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ObjectsFailed)
// string "BytesDone"
o = append(o, 0xa9, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
o = msgp.AppendUint64(o, z.BytesDone)
// string "BytesFailed"
o = append(o, 0xab, 0x42, 0x79, 0x74, 0x65, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.BytesFailed)
// string "Bucket"
o = append(o, 0xa6, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74)
o = msgp.AppendString(o, z.Bucket)
// string "Object"
o = append(o, 0xa6, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74)
o = msgp.AppendString(o, z.Object)
// string "ResumeObjectsHealed"
o = append(o, 0xb3, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x48, 0x65, 0x61, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeObjectsHealed)
// string "ResumeObjectsFailed"
o = append(o, 0xb3, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeObjectsFailed)
// string "ResumeBytesDone"
o = append(o, 0xaf, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x44, 0x6f, 0x6e, 0x65)
o = msgp.AppendUint64(o, z.ResumeBytesDone)
// string "ResumeBytesFailed"
o = append(o, 0xb1, 0x52, 0x65, 0x73, 0x75, 0x6d, 0x65, 0x42, 0x79, 0x74, 0x65, 0x73, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64)
o = msgp.AppendUint64(o, z.ResumeBytesFailed)
// string "QueuedBuckets"
o = append(o, 0xad, 0x51, 0x75, 0x65, 0x75, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
o = msgp.AppendArrayHeader(o, uint32(len(z.QueuedBuckets)))
for za0001 := range z.QueuedBuckets {
o = msgp.AppendString(o, z.QueuedBuckets[za0001])
}
// string "HealedBuckets"
o = append(o, 0xad, 0x48, 0x65, 0x61, 0x6c, 0x65, 0x64, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x73)
o = msgp.AppendArrayHeader(o, uint32(len(z.HealedBuckets)))
for za0002 := range z.HealedBuckets {
o = msgp.AppendString(o, z.HealedBuckets[za0002])
}
return
}
@@ -91,6 +498,146 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "ID")
return
}
case "PoolIndex":
z.PoolIndex, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "PoolIndex")
return
}
case "SetIndex":
z.SetIndex, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "SetIndex")
return
}
case "DiskIndex":
z.DiskIndex, bts, err = msgp.ReadIntBytes(bts)
if err != nil {
err = msgp.WrapError(err, "DiskIndex")
return
}
case "Path":
z.Path, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Path")
return
}
case "Endpoint":
z.Endpoint, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Endpoint")
return
}
case "Started":
z.Started, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Started")
return
}
case "LastUpdate":
z.LastUpdate, bts, err = msgp.ReadTimeBytes(bts)
if err != nil {
err = msgp.WrapError(err, "LastUpdate")
return
}
case "ObjectsHealed":
z.ObjectsHealed, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ObjectsHealed")
return
}
case "ObjectsFailed":
z.ObjectsFailed, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ObjectsFailed")
return
}
case "BytesDone":
z.BytesDone, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "BytesDone")
return
}
case "BytesFailed":
z.BytesFailed, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "BytesFailed")
return
}
case "Bucket":
z.Bucket, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Bucket")
return
}
case "Object":
z.Object, bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "Object")
return
}
case "ResumeObjectsHealed":
z.ResumeObjectsHealed, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeObjectsHealed")
return
}
case "ResumeObjectsFailed":
z.ResumeObjectsFailed, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeObjectsFailed")
return
}
case "ResumeBytesDone":
z.ResumeBytesDone, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesDone")
return
}
case "ResumeBytesFailed":
z.ResumeBytesFailed, bts, err = msgp.ReadUint64Bytes(bts)
if err != nil {
err = msgp.WrapError(err, "ResumeBytesFailed")
return
}
case "QueuedBuckets":
var zb0002 uint32
zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "QueuedBuckets")
return
}
if cap(z.QueuedBuckets) >= int(zb0002) {
z.QueuedBuckets = (z.QueuedBuckets)[:zb0002]
} else {
z.QueuedBuckets = make([]string, zb0002)
}
for za0001 := range z.QueuedBuckets {
z.QueuedBuckets[za0001], bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "QueuedBuckets", za0001)
return
}
}
case "HealedBuckets":
var zb0003 uint32
zb0003, bts, err = msgp.ReadArrayHeaderBytes(bts)
if err != nil {
err = msgp.WrapError(err, "HealedBuckets")
return
}
if cap(z.HealedBuckets) >= int(zb0003) {
z.HealedBuckets = (z.HealedBuckets)[:zb0003]
} else {
z.HealedBuckets = make([]string, zb0003)
}
for za0002 := range z.HealedBuckets {
z.HealedBuckets[za0002], bts, err = msgp.ReadStringBytes(bts)
if err != nil {
err = msgp.WrapError(err, "HealedBuckets", za0002)
return
}
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -104,7 +651,14 @@ func (z *healingTracker) UnmarshalMsg(bts []byte) (o []byte, err error) {
}
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z healingTracker) Msgsize() (s int) {
s = 1 + 3 + msgp.StringPrefixSize + len(z.ID)
func (z *healingTracker) Msgsize() (s int) {
s = 3 + 3 + msgp.StringPrefixSize + len(z.ID) + 10 + msgp.IntSize + 9 + msgp.IntSize + 10 + msgp.IntSize + 5 + msgp.StringPrefixSize + len(z.Path) + 9 + msgp.StringPrefixSize + len(z.Endpoint) + 8 + msgp.TimeSize + 11 + msgp.TimeSize + 14 + msgp.Uint64Size + 14 + msgp.Uint64Size + 10 + msgp.Uint64Size + 12 + msgp.Uint64Size + 7 + msgp.StringPrefixSize + len(z.Bucket) + 7 + msgp.StringPrefixSize + len(z.Object) + 20 + msgp.Uint64Size + 20 + msgp.Uint64Size + 16 + msgp.Uint64Size + 18 + msgp.Uint64Size + 14 + msgp.ArrayHeaderSize
for za0001 := range z.QueuedBuckets {
s += msgp.StringPrefixSize + len(z.QueuedBuckets[za0001])
}
s += 14 + msgp.ArrayHeaderSize
for za0002 := range z.HealedBuckets {
s += msgp.StringPrefixSize + len(z.HealedBuckets[za0002])
}
return
}

View File

@@ -19,7 +19,6 @@ package cmd
import (
"bytes"
"context"
"io/ioutil"
"math"
"math/rand"
"strconv"
@@ -55,7 +54,7 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
for i := 0; i < b.N; i++ {
// insert the object.
objInfo, err := obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
}
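Swapping bytes.NewBuffer for bytes.NewReader throughout the benchmarks is more than cosmetic: a Reader wraps the slice with no copy or growth bookkeeping, and it supports Seek and ReadAt, so the same payload can be rewound and replayed across iterations. A small illustration:

```
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	data := []byte("payload")
	r := bytes.NewReader(data)
	io.Copy(io.Discard, r)  // drain once
	r.Seek(0, io.SeekStart) // rewind and read again; a drained
	b, _ := io.ReadAll(r)   // bytes.Buffer cannot be rewound
	fmt.Println(string(b))  // payload
}
```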
@@ -114,7 +113,7 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
md5hex := getMD5Hash([]byte(textPartData))
var partInfo PartInfo
partInfo, err = obj.PutObjectPart(context.Background(), bucket, object, uploadID, j,
mustGetPutObjReader(b, bytes.NewBuffer(textPartData), int64(len(textPartData)), md5hex, sha256hex), ObjectOptions{})
mustGetPutObjReader(b, bytes.NewReader(textPartData), int64(len(textPartData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
}
@@ -175,56 +174,6 @@ func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int)
runPutObjectBenchmarkParallel(b, objLayer, objSize)
}
// Benchmark utility functions for ObjectLayer.GetObject().
// Creates Object layer setup ( MakeBucket, PutObject) and then runs the benchmark.
func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
textData := generateBytesData(objSize)
// generate etag for the generated data.
// etag of the data to written is required as input for PutObject.
// PutObject is the functions which writes the data onto the FS/Erasure backend.
// get text data generated for number of bytes equal to object size.
md5hex := getMD5Hash(textData)
sha256hex := ""
for i := 0; i < 10; i++ {
// insert the object.
var objInfo ObjectInfo
objInfo, err = obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
}
if objInfo.ETag != md5hex {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, md5hex)
}
}
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for GetObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
var buffer = new(bytes.Buffer)
err = obj.GetObject(context.Background(), bucket, "object"+strconv.Itoa(i%10), 0, int64(objSize), buffer, "", ObjectOptions{})
if err != nil {
b.Error(err)
}
}
// Benchmark ends here. Stop timer.
b.StopTimer()
}
// randomly picks a character and returns its equivalent byte array.
func getRandomByte() []byte {
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
@@ -240,38 +189,6 @@ func generateBytesData(size int) []byte {
return bytes.Repeat(getRandomByte(), size)
}
// creates Erasure/FS backend setup, obtains the object layer and calls the runGetObjectBenchmark function.
func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runGetObjectBenchmark(b, objLayer, objSize)
}
// creates Erasure/FS backend setup, obtains the object layer and runs parallel benchmark for ObjectLayer.GetObject() .
func benchmarkGetObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp Erasure/FS backend.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
objLayer, disks, err := prepareTestBackend(ctx, instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runGetObjectBenchmarkParallel(b, objLayer, objSize)
}
// Parallel benchmark utility functions for ObjectLayer.PutObject().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObject benchmark.
func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
@@ -301,7 +218,7 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
for pb.Next() {
// insert the object.
objInfo, err := obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
}
@@ -315,58 +232,3 @@ func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// Benchmark ends here. Stop timer.
b.StopTimer()
}
// Parallel benchmark utility functions for ObjectLayer.GetObject().
// Creates Object layer setup ( MakeBucket, PutObject) and then runs the benchmark.
func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err := obj.MakeBucketWithLocation(context.Background(), bucket, BucketOptions{})
if err != nil {
b.Fatal(err)
}
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to written is required as input for PutObject.
// PutObject is the functions which writes the data onto the FS/Erasure backend.
md5hex := getMD5Hash([]byte(textData))
sha256hex := ""
for i := 0; i < 10; i++ {
// insert the object.
var objInfo ObjectInfo
objInfo, err = obj.PutObject(context.Background(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewBuffer(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
if err != nil {
b.Fatal(err)
}
if objInfo.ETag != md5hex {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, objInfo.ETag, md5hex)
}
}
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for GetObject starts here. Reset the benchmark timer.
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
i := 0
for pb.Next() {
err = obj.GetObject(context.Background(), bucket, "object"+strconv.Itoa(i), 0, int64(objSize), ioutil.Discard, "", ObjectOptions{})
if err != nil {
b.Error(err)
}
i++
if i == 10 {
i = 0
}
}
})
// Benchmark ends here. Stop timer.
b.StopTimer()
}

View File

@@ -37,7 +37,7 @@ func (err *errHashMismatch) Error() string {
// Calculates bitrot in chunks and writes the hash into the stream.
type streamingBitrotWriter struct {
iow *io.PipeWriter
iow io.WriteCloser
h hash.Hash
shardSize int64
canClose chan struct{} // Needed to avoid race explained in Close() call.
@@ -71,9 +71,10 @@ func (b *streamingBitrotWriter) Close() error {
}
// Returns streaming bitrot writer implementation.
func newStreamingBitrotWriter(disk StorageAPI, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.WriteCloser {
func newStreamingBitrotWriter(disk StorageAPI, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64, heal bool) io.Writer {
r, w := io.Pipe()
h := algo.New()
bw := &streamingBitrotWriter{w, h, shardSize, make(chan struct{})}
go func() {
totalFileSize := int64(-1) // For compressed objects length will be unknown (represented by length=-1)
@@ -81,8 +82,7 @@ func newStreamingBitrotWriter(disk StorageAPI, volume, filePath string, length i
bitrotSumsTotalSize := ceilFrac(length, shardSize) * int64(h.Size()) // Size used for storing bitrot checksums.
totalFileSize = bitrotSumsTotalSize + length
}
err := disk.CreateFile(context.TODO(), volume, filePath, totalFileSize, r)
r.CloseWithError(err)
r.CloseWithError(disk.CreateFile(context.TODO(), volume, filePath, totalFileSize, r))
close(bw.canClose)
}()
return bw
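The writer above pushes data through an io.Pipe: Write calls feed the write end while a goroutine hands the read end to disk.CreateFile, and CloseWithError reports the consumer's result back to any blocked writer. The pattern in isolation (consume stands in for the CreateFile call):

```
package main

import (
	"fmt"
	"io"
	"strings"
)

// consume stands in for disk.CreateFile draining the read end.
func consume(r io.Reader) error {
	_, err := io.Copy(io.Discard, r)
	return err
}

func main() {
	r, w := io.Pipe()
	done := make(chan struct{}) // mirrors the canClose channel above
	go func() {
		// Propagate the consumer's result to the writer side.
		r.CloseWithError(consume(r))
		close(done)
	}()
	io.Copy(w, strings.NewReader("shard data"))
	w.Close()
	<-done
	fmt.Println("pipe drained")
}
```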
@@ -91,7 +91,8 @@ func newStreamingBitrotWriter(disk StorageAPI, volume, filePath string, length i
// ReadAt() implementation which verifies the bitrot hash available as part of the stream.
type streamingBitrotReader struct {
disk StorageAPI
rc io.ReadCloser
data []byte
rc io.Reader
volume string
filePath string
tillOffset int64
@@ -105,7 +106,10 @@ func (b *streamingBitrotReader) Close() error {
if b.rc == nil {
return nil
}
return b.rc.Close()
if closer, ok := b.rc.(io.Closer); ok {
return closer.Close()
}
return nil
}
func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
@@ -119,11 +123,16 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
// For the first ReadAt() call we need to open the stream for reading.
b.currOffset = offset
streamOffset := (offset/b.shardSize)*int64(b.h.Size()) + offset
b.rc, err = b.disk.ReadFileStream(context.TODO(), b.volume, b.filePath, streamOffset, b.tillOffset-streamOffset)
if len(b.data) == 0 {
b.rc, err = b.disk.ReadFileStream(context.TODO(), b.volume, b.filePath, streamOffset, b.tillOffset-streamOffset)
} else {
b.rc = io.NewSectionReader(bytes.NewReader(b.data), streamOffset, b.tillOffset-streamOffset)
}
if err != nil {
return 0, err
}
}
if offset != b.currOffset {
// Can never happen unless there are programmer bugs
return 0, errUnexpected
@@ -140,20 +149,20 @@ func (b *streamingBitrotReader) ReadAt(buf []byte, offset int64) (int, error) {
b.h.Write(buf)
if !bytes.Equal(b.h.Sum(nil), b.hashBytes) {
err := &errHashMismatch{fmt.Sprintf("Disk: %s -> %s/%s - content hash does not match - expected %s, got %s",
b.disk, b.volume, b.filePath, hex.EncodeToString(b.hashBytes), hex.EncodeToString(b.h.Sum(nil)))}
logger.LogIf(GlobalContext, err)
return 0, err
logger.LogIf(GlobalContext, fmt.Errorf("Disk: %s -> %s/%s - content hash does not match - expected %s, got %s",
b.disk, b.volume, b.filePath, hex.EncodeToString(b.hashBytes), hex.EncodeToString(b.h.Sum(nil))))
return 0, errFileCorrupt
}
b.currOffset += int64(len(buf))
return len(buf), nil
}
// Returns streaming bitrot reader implementation.
func newStreamingBitrotReader(disk StorageAPI, volume, filePath string, tillOffset int64, algo BitrotAlgorithm, shardSize int64) *streamingBitrotReader {
func newStreamingBitrotReader(disk StorageAPI, data []byte, volume, filePath string, tillOffset int64, algo BitrotAlgorithm, shardSize int64) *streamingBitrotReader {
h := algo.New()
return &streamingBitrotReader{
disk,
data,
nil,
volume,
filePath,

View File

@@ -17,13 +17,13 @@
package cmd
import (
"crypto/sha256"
"errors"
"hash"
"io"
"github.com/minio/highwayhash"
"github.com/minio/minio/cmd/logger"
sha256 "github.com/minio/sha256-simd"
"golang.org/x/crypto/blake2b"
)
@@ -96,16 +96,16 @@ func BitrotAlgorithmFromString(s string) (a BitrotAlgorithm) {
return
}
func newBitrotWriter(disk StorageAPI, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer {
func newBitrotWriter(disk StorageAPI, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64, heal bool) io.Writer {
if algo == HighwayHash256S {
return newStreamingBitrotWriter(disk, volume, filePath, length, algo, shardSize)
return newStreamingBitrotWriter(disk, volume, filePath, length, algo, shardSize, heal)
}
return newWholeBitrotWriter(disk, volume, filePath, algo, shardSize)
}
func newBitrotReader(disk StorageAPI, bucket string, filePath string, tillOffset int64, algo BitrotAlgorithm, sum []byte, shardSize int64) io.ReaderAt {
func newBitrotReader(disk StorageAPI, data []byte, bucket string, filePath string, tillOffset int64, algo BitrotAlgorithm, sum []byte, shardSize int64) io.ReaderAt {
if algo == HighwayHash256S {
return newStreamingBitrotReader(disk, bucket, filePath, tillOffset, algo, shardSize)
return newStreamingBitrotReader(disk, data, bucket, filePath, tillOffset, algo, shardSize)
}
return newWholeBitrotReader(disk, bucket, filePath, algo, tillOffset, sum)
}

View File

@@ -20,7 +20,6 @@ import (
"context"
"io"
"io/ioutil"
"log"
"os"
"testing"
)
@@ -28,7 +27,7 @@ import (
func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
tmpDir, err := ioutil.TempDir("", "")
if err != nil {
log.Fatal(err)
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
@@ -42,39 +41,39 @@ func testBitrotReaderWriterAlgo(t *testing.T, bitrotAlgo BitrotAlgorithm) {
disk.MakeVol(context.Background(), volume)
writer := newBitrotWriter(disk, volume, filePath, 35, bitrotAlgo, 10)
writer := newBitrotWriter(disk, volume, filePath, 35, bitrotAlgo, 10, false)
_, err = writer.Write([]byte("aaaaaaaaaa"))
if err != nil {
log.Fatal(err)
t.Fatal(err)
}
_, err = writer.Write([]byte("aaaaaaaaaa"))
if err != nil {
log.Fatal(err)
t.Fatal(err)
}
_, err = writer.Write([]byte("aaaaaaaaaa"))
if err != nil {
log.Fatal(err)
t.Fatal(err)
}
_, err = writer.Write([]byte("aaaaa"))
if err != nil {
log.Fatal(err)
t.Fatal(err)
}
writer.(io.Closer).Close()
reader := newBitrotReader(disk, volume, filePath, 35, bitrotAlgo, bitrotWriterSum(writer), 10)
reader := newBitrotReader(disk, nil, volume, filePath, 35, bitrotAlgo, bitrotWriterSum(writer), 10)
b := make([]byte, 10)
if _, err = reader.ReadAt(b, 0); err != nil {
log.Fatal(err)
t.Fatal(err)
}
if _, err = reader.ReadAt(b, 10); err != nil {
log.Fatal(err)
t.Fatal(err)
}
if _, err = reader.ReadAt(b, 20); err != nil {
log.Fatal(err)
t.Fatal(err)
}
if _, err = reader.ReadAt(b[:5], 30); err != nil {
log.Fatal(err)
t.Fatal(err)
}
}
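Replacing log.Fatal with t.Fatal in these tests matters: log.Fatal calls os.Exit(1), which kills the whole test binary and skips deferred cleanup such as the os.RemoveAll above, while t.Fatal fails only the current test and still runs defers. A tiny demonstration test:

```
package demo

import (
	"os"
	"testing"
)

func TestCleanupStillRuns(t *testing.T) {
	dir, err := os.MkdirTemp("", "bitrot-demo")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(dir) // runs even if a later t.Fatal fires
	if _, err := os.Stat(dir); err != nil {
		t.Fatal(err) // fails this test only; log.Fatal would os.Exit
	}
}
```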

View File

@@ -37,7 +37,7 @@ const (
func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketEncryption")
defer logger.AuditLog(w, r, "PutBucketEncryption", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {
@@ -102,7 +102,7 @@ func (api objectAPIHandlers) PutBucketEncryptionHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) GetBucketEncryptionHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketEncryption")
defer logger.AuditLog(w, r, "GetBucketEncryption", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {
@@ -145,7 +145,7 @@ func (api objectAPIHandlers) GetBucketEncryptionHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) DeleteBucketEncryptionHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketEncryption")
defer logger.AuditLog(w, r, "DeleteBucketEncryption", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {

View File

@@ -17,23 +17,26 @@
package cmd
import (
"bytes"
"crypto/subtle"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"net/http"
"net/textproto"
"net/url"
"path"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"github.com/google/uuid"
"github.com/gorilla/mux"
"github.com/minio/minio-go/v7/pkg/set"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/config"
"github.com/minio/minio/cmd/config/dns"
"github.com/minio/minio/cmd/crypto"
xhttp "github.com/minio/minio/cmd/http"
@@ -42,7 +45,6 @@ import (
objectlock "github.com/minio/minio/pkg/bucket/object/lock"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/bucket/replication"
"github.com/minio/minio/pkg/env"
"github.com/minio/minio/pkg/event"
"github.com/minio/minio/pkg/handlers"
"github.com/minio/minio/pkg/hash"
@@ -74,7 +76,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
// Get buckets in the DNS
dnsBuckets, err := globalDNSConfig.List()
if err != nil && err != dns.ErrNoEntriesFound && err != dns.ErrNotImplemented {
if err != nil && !IsErrIgnored(err, dns.ErrNoEntriesFound, dns.ErrNotImplemented, dns.ErrDomainMissing) {
logger.LogIf(GlobalContext, err)
return
}
@@ -82,6 +84,10 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
bucketsSet := set.NewStringSet()
bucketsToBeUpdated := set.NewStringSet()
bucketsInConflict := set.NewStringSet()
// This means that the domain was updated; we should update
// all bucket entries with new domain name.
domainMissing := err == dns.ErrDomainMissing
if dnsBuckets != nil {
for _, bucket := range buckets {
bucketsSet.Add(bucket.Name)
@@ -91,11 +97,16 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
continue
}
if !globalDomainIPs.Intersection(set.CreateStringSet(getHostsSlice(r)...)).IsEmpty() {
if globalDomainIPs.Difference(set.CreateStringSet(getHostsSlice(r)...)).IsEmpty() {
if globalDomainIPs.Difference(set.CreateStringSet(getHostsSlice(r)...)).IsEmpty() && !domainMissing {
// No difference in terms of domainIPs and nothing
// has changed, so we don't change anything in etcd.
//
// Additionally, check whether the domain was updated or is missing
// entries; if that is the case, we should update the
// new domain entries as well.
continue
}
// if domain IPs intersect then it won't be an empty set.
// such an intersection means that bucket exists on etcd.
// but if we do see a difference with local domain IPs with
@@ -104,6 +115,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
bucketsToBeUpdated.Add(bucket.Name)
continue
}
// No IPs intersect; this means the bucket exists but has
// different IP addresses, perhaps from a different deployment.
// Bucket names are globally unique in federation at a given
@@ -114,8 +126,11 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
}
// Add/update buckets that are not registered with the DNS
g := errgroup.WithNErrs(len(buckets))
bucketsToBeUpdatedSlice := bucketsToBeUpdated.ToSlice()
g := errgroup.WithNErrs(len(bucketsToBeUpdatedSlice)).WithConcurrency(50)
ctx, cancel := g.WithCancelOnError(GlobalContext)
defer cancel()
for index := range bucketsToBeUpdatedSlice {
index := index
g.Go(func() error {
@@ -123,16 +138,16 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
}, index)
}
for _, err := range g.Wait() {
if err != nil {
logger.LogIf(GlobalContext, err)
}
if err := g.WaitErr(); err != nil {
logger.LogIf(ctx, err)
return
}
for _, bucket := range bucketsInConflict.ToSlice() {
logger.LogIf(GlobalContext, fmt.Errorf("Unable to add bucket DNS entry for bucket %s, an entry exists for the same bucket. Use one of these IP addresses %v to access the bucket", bucket, globalDomainIPs.ToSlice()))
logger.LogIf(ctx, fmt.Errorf("Unable to add bucket DNS entry for bucket %s, an entry exists for the same bucket by a different tenant. This local bucket will be ignored. Bucket names are globally unique in federated deployments. Use path style requests on following addresses '%v' to access this bucket", bucket, globalDomainIPs.ToSlice()))
}
var wg sync.WaitGroup
// Remove buckets that are in DNS for this server, but aren't local
for bucket, records := range dnsBuckets {
if bucketsSet.Contains(bucket) {
@@ -144,13 +159,18 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
continue
}
// We go to here, so we know the bucket no longer exists,
// but is registered in DNS to this server
if err = globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(GlobalContext, fmt.Errorf("Failed to remove DNS entry for %s due to %w",
bucket, err))
}
wg.Add(1)
go func(bucket string) {
defer wg.Done()
// Reaching here means the bucket no longer exists
// but is still registered in DNS to this server.
if err := globalDNSConfig.Delete(bucket); err != nil {
logger.LogIf(GlobalContext, fmt.Errorf("Failed to remove DNS entry for %s due to %w",
bucket, err))
}
}(bucket)
}
wg.Wait()
}
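Moving the DNS deletions into goroutines behind a WaitGroup turns a serial sequence of etcd round-trips into a concurrent sweep that still completes before the function returns. The shape of that change, with a hypothetical delete in place of globalDNSConfig.Delete:

```
package main

import (
	"fmt"
	"sync"
)

// deleteDNSEntry is a stand-in for the etcd deletion call.
func deleteDNSEntry(bucket string) error {
	fmt.Println("deleting DNS entry for", bucket)
	return nil
}

func main() {
	stale := []string{"bucket-a", "bucket-b", "bucket-c"}
	var wg sync.WaitGroup
	for _, b := range stale {
		wg.Add(1)
		go func(bucket string) { // pass the loop variable explicitly
			defer wg.Done()
			if err := deleteDNSEntry(bucket); err != nil {
				fmt.Println("failed:", bucket, err)
			}
		}(b)
	}
	wg.Wait() // all deletions finished (or logged) before returning
}
```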
// GetBucketLocationHandler - GET Bucket location.
@@ -159,7 +179,7 @@ func initFederatorBackend(buckets []BucketInfo, objLayer ObjectLayer) {
func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketLocation")
defer logger.AuditLog(w, r, "GetBucketLocation", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -207,7 +227,7 @@ func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *
func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListMultipartUploads")
defer logger.AuditLog(w, r, "ListMultipartUploads", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -262,7 +282,7 @@ func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter,
func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListBuckets")
defer logger.AuditLog(w, r, "ListBuckets", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI := api.ObjectAPI()
if objectAPI == nil {
@@ -272,7 +292,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
listBuckets := objectAPI.ListBuckets
accessKey, owner, s3Error := checkRequestAuthTypeToAccessKey(ctx, r, policy.ListAllMyBucketsAction, "", "")
cred, owner, s3Error := checkRequestAuthTypeCredential(ctx, r, policy.ListAllMyBucketsAction, "", "")
if s3Error != ErrNone && s3Error != ErrAccessDenied {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
@@ -282,7 +302,9 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
var bucketsInfo []BucketInfo
if globalDNSConfig != nil && globalBucketFederation {
dnsBuckets, err := globalDNSConfig.List()
if err != nil && err != dns.ErrNoEntriesFound {
if err != nil && !IsErrIgnored(err,
dns.ErrNoEntriesFound,
dns.ErrDomainMissing) {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
@@ -292,6 +314,11 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
Created: dnsRecords[0].CreationDate,
})
}
sort.Slice(bucketsInfo, func(i, j int) bool {
return bucketsInfo[i].Name < bucketsInfo[j].Name
})
} else {
// Invoke the list buckets.
var err error
@@ -311,16 +338,17 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
// err will be nil here as we already called this function
// earlier in this request.
claims, _ := getClaimsFromToken(r, getSessionToken(r))
claims, _ := getClaimsFromToken(getSessionToken(r))
n := 0
// Use the following trick to filter in place
// https://github.com/golang/go/wiki/SliceTricks#filter-in-place
for _, bucketInfo := range bucketsInfo {
if globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: accessKey,
AccountName: cred.AccessKey,
Groups: cred.Groups,
Action: iampolicy.ListBucketAction,
BucketName: bucketInfo.Name,
ConditionValues: getConditionValues(r, "", accessKey, claims),
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
IsOwner: owner,
ObjectName: "",
Claims: claims,
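The loop above uses the filter-in-place idiom linked in the comment: buckets that pass the IAM check are compacted to the front of the slice, and the slice is truncated once at the end, avoiding a second allocation. The idiom on its own (hypothetical policy check):

```
package main

import "fmt"

func main() {
	buckets := []string{"public", "private", "logs"}
	allowed := func(b string) bool { return b != "private" } // stand-in policy check
	n := 0
	for _, b := range buckets {
		if allowed(b) {
			buckets[n] = b // keep: overwrite in place
			n++
		}
	}
	buckets = buckets[:n] // truncate once, no new allocation
	fmt.Println(buckets)  // [public logs]
}
```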
@@ -349,7 +377,7 @@ func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.R
func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteMultipleObjects")
defer logger.AuditLog(w, r, "DeleteMultipleObjects", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -385,6 +413,11 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
return
}
// Trim the leading `/` from each object name in the delete list, if present.
for i := range deleteObjects.Objects {
deleteObjects.Objects[i].ObjectName = trimLeadingSlash(deleteObjects.Objects[i].ObjectName)
}
// Call checkRequestAuthType to populate ReqInfo.AccessKey before GetBucketInfo()
// Ignore errors here to preserve the S3 error behavior of GetBucketInfo()
checkRequestAuthType(ctx, r, policy.DeleteObjectAction, bucket, "")
@@ -401,15 +434,21 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
deleteObjectsFn = api.CacheAPI().DeleteObjects
}
// Return MalformedXML, per the S3 spec, if the list of objects is empty
if len(deleteObjects.Objects) == 0 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedXML), r.URL, guessIsBrowserReq(r))
return
}
var objectsToDelete = map[ObjectToDelete]int{}
getObjectInfoFn := objectAPI.GetObjectInfo
if api.CacheAPI() != nil {
getObjectInfoFn = api.CacheAPI().GetObjectInfo
}
var (
hasLockEnabled, hasLifecycleConfig bool
goi ObjectInfo
gerr error
hasLockEnabled, hasLifecycleConfig, replicateSync bool
goi ObjectInfo
gerr error
)
replicateDeletes := hasReplicationRules(ctx, bucket, deleteObjects.Objects)
if rcfg, _ := globalBucketObjectLockSys.Get(bucket); rcfg.LockEnabled {
@@ -457,16 +496,21 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
object.PurgeTransitioned = goi.TransitionStatus
}
if replicateDeletes {
delMarker, replicate := checkReplicateDelete(ctx, bucket, ObjectToDelete{
replicate, repsync := checkReplicateDelete(ctx, bucket, ObjectToDelete{
ObjectName: object.ObjectName,
VersionID: object.VersionID,
}, goi, gerr)
replicateSync = repsync
if replicate {
if apiErrCode := checkRequestAuthType(ctx, r, policy.ReplicateDeleteAction, bucket, object.ObjectName); apiErrCode != ErrNone {
if apiErrCode == ErrSignatureDoesNotMatch || apiErrCode == ErrInvalidAccessKeyID {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(apiErrCode), r.URL, guessIsBrowserReq(r))
return
}
continue
}
if object.VersionID != "" {
object.VersionPurgeStatus = Pending
if delMarker {
object.DeleteMarkerVersionID = object.VersionID
}
} else {
object.DeleteMarkerReplicationStatus = string(replication.Pending)
}
@@ -510,14 +554,18 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
})
deletedObjects := make([]DeletedObject, len(deleteObjects.Objects))
for i := range errs {
dindex := objectsToDelete[ObjectToDelete{
// DeleteMarkerVersionID is deliberately left out of the key to avoid
// lookup misses, since DeleteMarkerVersionID is only
// set during delete-marker creation, when the client didn't
// specify a versionID.
objToDel := ObjectToDelete{
ObjectName: dObjects[i].ObjectName,
VersionID: dObjects[i].VersionID,
DeleteMarkerVersionID: dObjects[i].DeleteMarkerVersionID,
VersionPurgeStatus: dObjects[i].VersionPurgeStatus,
DeleteMarkerReplicationStatus: dObjects[i].DeleteMarkerReplicationStatus,
PurgeTransitioned: dObjects[i].PurgeTransitioned,
}]
}
dindex := objectsToDelete[objToDel]
if errs[i] == nil || isErrObjectNotFound(errs[i]) || isErrVersionNotFound(errs[i]) {
if replicateDeletes {
dObjects[i].DeleteMarkerReplicationStatus = deleteList[i].DeleteMarkerReplicationStatus
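The comment above hinges on Go map semantics: a struct key matches only when every field is equal, so populating DeleteMarkerVersionID in the lookup key (it was unset when the entry was stored) would silently miss. Demonstrated with a reduced key type:

```
package main

import "fmt"

// objKey is a reduced stand-in for ObjectToDelete.
type objKey struct {
	ObjectName, VersionID, DeleteMarkerVersionID string
}

func main() {
	m := map[objKey]int{{ObjectName: "a.txt"}: 7}
	fmt.Println(m[objKey{ObjectName: "a.txt"}]) // 7: all fields equal
	fmt.Println(m[objKey{ObjectName: "a.txt", DeleteMarkerVersionID: "v1"}]) // 0: one extra field, no match
}
```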
@@ -549,39 +597,36 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
// Write success response.
writeSuccessResponseXML(w, encodedSuccessResponse)
for _, dobj := range deletedObjects {
if dobj.ObjectName == "" {
continue
}
if replicateDeletes {
if dobj.DeleteMarkerReplicationStatus == string(replication.Pending) || dobj.VersionPurgeStatus == Pending {
globalReplicationState.queueReplicaDeleteTask(DeletedObjectVersionInfo{
dv := DeletedObjectVersionInfo{
DeletedObject: dobj,
Bucket: bucket,
})
}
scheduleReplicationDelete(ctx, dv, objectAPI, replicateSync)
}
}
if hasLifecycleConfig && dobj.PurgeTransitioned == lifecycle.TransitionComplete { // clean up transitioned tier
action := lifecycle.DeleteAction
if dobj.VersionID != "" {
action = lifecycle.DeleteVersionAction
}
deleteTransitionedObject(ctx, newObjectLayerFn(), bucket, dobj.ObjectName, lifecycle.ObjectOpts{
deleteTransitionedObject(ctx, objectAPI, bucket, dobj.ObjectName, lifecycle.ObjectOpts{
Name: dobj.ObjectName,
VersionID: dobj.VersionID,
DeleteMarker: dobj.DeleteMarker,
}, action, true)
}, false, true)
}
}
// Notify deleted event for objects.
for _, dobj := range deletedObjects {
eventName := event.ObjectRemovedDelete
objInfo := ObjectInfo{
-Name: dobj.ObjectName,
-VersionID: dobj.VersionID,
+Name: dobj.ObjectName,
+VersionID: dobj.VersionID,
+DeleteMarker: dobj.DeleteMarker,
}
-if dobj.DeleteMarker {
-objInfo.DeleteMarker = dobj.DeleteMarker
+if objInfo.DeleteMarker {
objInfo.VersionID = dobj.DeleteMarkerVersionID
eventName = event.ObjectRemovedDeleteMarkerCreated
}
@@ -604,7 +649,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucket")
defer logger.AuditLog(w, r, "PutBucket", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI := api.ObjectAPI()
if objectAPI == nil {
@@ -735,7 +780,7 @@ func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Req
func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PostPolicyBucket")
defer logger.AuditLog(w, r, "PostPolicyBucket", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI := api.ObjectAPI()
if objectAPI == nil {
@@ -748,7 +793,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if !objectAPI.IsEncryptionSupported() && crypto.IsRequested(r.Header) {
if _, ok := crypto.IsRequested(r.Header); !objectAPI.IsEncryptionSupported() && ok {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL, guessIsBrowserReq(r))
return
}
@@ -761,13 +806,15 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMissingContentLength), r.URL, guessIsBrowserReq(r))
return
}
resource, err := getResource(r.URL.Path, r.Host, globalDomainNames)
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidRequest), r.URL, guessIsBrowserReq(r))
return
}
-// Make sure that the URL does not contain object name.
-if bucket != filepath.Clean(resource[1:]) {
+// Make sure that the URL does not contain object name.
+if bucket != path.Clean(resource[1:]) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMethodNotAllowed), r.URL, guessIsBrowserReq(r))
return
}
@@ -810,13 +857,12 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
defer fileBody.Close()
formValues.Set("Bucket", bucket)
if fileName != "" && strings.Contains(formValues.Get("Key"), "${filename}") {
// S3 feature to replace ${filename} found in Key form field
// by the filename attribute passed in multipart
formValues.Set("Key", strings.Replace(formValues.Get("Key"), "${filename}", fileName, -1))
}
object := formValues.Get("Key")
object := trimLeadingSlash(formValues.Get("Key"))
successRedirect := formValues.Get("success_action_redirect")
successStatus := formValues.Get("success_action_status")
@@ -830,12 +876,51 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
}
// Verify policy signature.
errCode := doesPolicySignatureMatch(formValues)
cred, errCode := doesPolicySignatureMatch(formValues)
if errCode != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(errCode), r.URL, guessIsBrowserReq(r))
return
}
// Once signature is validated, check if the user has
// explicit permissions for the user.
{
token := formValues.Get(xhttp.AmzSecurityToken)
if token != "" && cred.AccessKey == "" {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrNoAccessKey), r.URL, guessIsBrowserReq(r))
return
}
if cred.IsServiceAccount() && token == "" {
token = cred.SessionToken
}
if subtle.ConstantTimeCompare([]byte(token), []byte(cred.SessionToken)) != 1 {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrInvalidToken), r.URL, guessIsBrowserReq(r))
return
}
// Extract claims if any.
claims, err := getClaimsFromToken(token)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
if !globalIAMSys.IsAllowed(iampolicy.Args{
AccountName: cred.AccessKey,
Action: iampolicy.PutObjectAction,
ConditionValues: getConditionValues(r, "", cred.AccessKey, claims),
BucketName: bucket,
ObjectName: object,
IsOwner: globalActiveCred.AccessKey == cred.AccessKey,
Claims: claims,
}) {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrAccessDenied), r.URL, guessIsBrowserReq(r))
return
}
}
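The block above compares the form token against the credential's session token with subtle.ConstantTimeCompare, so the comparison cost does not reveal where a forged token first diverges. A minimal sketch of the same check in isolation:

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// tokensEqual mirrors the session-token check above: ConstantTimeCompare
// runs in time independent of where the inputs differ (it does return
// early on a length mismatch, so only the length can leak).
func tokensEqual(a, b string) bool {
	return subtle.ConstantTimeCompare([]byte(a), []byte(b)) == 1
}

func main() {
	fmt.Println(tokensEqual("session-token", "session-token")) // true
	fmt.Println(tokensEqual("session-token", "forged-token!")) // false
}
```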
policyBytes, err := base64.StdEncoding.DecodeString(formValues.Get("Policy"))
if err != nil {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedPOSTRequest), r.URL, guessIsBrowserReq(r))
@@ -844,10 +929,11 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Handle policy if it is set.
if len(policyBytes) > 0 {
-postPolicyForm, err := parsePostPolicyForm(string(policyBytes))
+postPolicyForm, err := parsePostPolicyForm(bytes.NewReader(policyBytes))
if err != nil {
-writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrPostPolicyConditionInvalidFormat), r.URL, guessIsBrowserReq(r))
+errAPI := errorCodes.ToAPIErr(ErrPostPolicyConditionInvalidFormat)
+errAPI.Description = fmt.Sprintf("%s '(%s)'", errAPI.Description, err)
+writeErrorResponse(ctx, w, errAPI, r.URL, guessIsBrowserReq(r))
return
}
@@ -875,27 +961,27 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Extract metadata to be saved from received Form.
metadata := make(map[string]string)
err = extractMetadataFromMap(ctx, formValues, metadata)
err = extractMetadataFromMime(ctx, textproto.MIMEHeader(formValues), metadata)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
hashReader, err := hash.NewReader(fileBody, fileSize, "", "", fileSize, globalCLIContext.StrictS3Compat)
hashReader, err := hash.NewReader(fileBody, fileSize, "", "", fileSize)
if err != nil {
logger.LogIf(ctx, err)
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
rawReader := hashReader
pReader := NewPutObjReader(rawReader, nil, nil)
pReader := NewPutObjReader(rawReader)
var objectEncryptionKey crypto.ObjectKey
// Check if bucket encryption is enabled
if _, err = globalBucketSSEConfigSys.Get(bucket); err == nil || globalAutoEncryption {
// This request header needs to be set prior to setting ObjectOptions
if !crypto.SSEC.IsRequested(r.Header) {
r.Header.Set(crypto.SSEHeader, crypto.SSEAlgorithmAES256)
r.Header.Set(xhttp.AmzServerSideEncryption, xhttp.AmzEncryptionAES)
}
}
@@ -907,7 +993,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
return
}
if objectAPI.IsEncryptionSupported() {
if crypto.IsRequested(formValues) && !HasSuffix(object, SlashSeparator) { // handle SSE requests
if _, ok := crypto.IsRequested(formValues); ok && !HasSuffix(object, SlashSeparator) { // handle SSE requests
if crypto.SSECopy.IsRequested(r.Header) {
writeErrorResponse(ctx, w, toAPIError(ctx, errInvalidEncryptionParameters), r.URL, guessIsBrowserReq(r))
return
@@ -928,12 +1014,16 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
}
info := ObjectInfo{Size: fileSize}
// do not try to verify encrypted content
hashReader, err = hash.NewReader(reader, info.EncryptedSize(), "", "", fileSize, globalCLIContext.StrictS3Compat)
hashReader, err = hash.NewReader(reader, info.EncryptedSize(), "", "", fileSize)
if err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
+pReader, err = pReader.WithEncryption(hashReader, &objectEncryptionKey)
+if err != nil {
+writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
+return
+}
-pReader = NewPutObjReader(rawReader, hashReader, &objectEncryptionKey)
}
}
@@ -990,6 +1080,64 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
}
}
// GetBucketPolicyStatusHandler - Retrieves the policy status
// for a MinIO bucket, indicating whether the bucket is public.
func (api objectAPIHandlers) GetBucketPolicyStatusHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketPolicyStatus")
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
objectAPI := api.ObjectAPI()
if objectAPI == nil {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(ErrServerNotInitialized))
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.GetBucketPolicyStatusAction, bucket, ""); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error))
return
}
// Check if bucket exists.
if _, err := objectAPI.GetBucketInfo(ctx, bucket); err != nil {
writeErrorResponse(ctx, w, toAPIError(ctx, err), r.URL, guessIsBrowserReq(r))
return
}
// Check if anonymous (non-owner) has access to list objects.
readable := globalPolicySys.IsAllowed(policy.Args{
Action: policy.ListBucketAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", "", nil),
IsOwner: false,
})
// Check if anonymous (non-owner) has access to upload objects.
writable := globalPolicySys.IsAllowed(policy.Args{
Action: policy.PutObjectAction,
BucketName: bucket,
ConditionValues: getConditionValues(r, "", "", nil),
IsOwner: false,
})
encodedSuccessResponse := encodeResponse(PolicyStatus{
IsPublic: func() string {
// Silly to have special 'boolean' values, yes,
// but we comply with the equally silly implementation
// https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html
if readable && writable {
return "TRUE"
}
return "FALSE"
}(),
})
writeSuccessResponseXML(w, encodedSuccessResponse)
}
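A bucket is reported public only when anonymous callers may both list and upload, and the result is serialized with the upper-case string booleans the S3 API mandates. A minimal sketch of consuming that response; the struct here is illustrative, not MinIO's exact type:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Illustrative response type; MinIO's actual PolicyStatus struct may differ.
type policyStatus struct {
	XMLName  xml.Name `xml:"PolicyStatus"`
	IsPublic string   `xml:"IsPublic"`
}

func main() {
	// Payload shaped like the handler's encodeResponse output above.
	payload := []byte(`<PolicyStatus><IsPublic>TRUE</IsPublic></PolicyStatus>`)

	var ps policyStatus
	if err := xml.Unmarshal(payload, &ps); err != nil {
		panic(err)
	}
	// Note the upper-case string booleans mandated by the S3 API.
	fmt.Println("public:", ps.IsPublic == "TRUE")
}
```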
// HeadBucketHandler - HEAD Bucket
// ----------
// This operation is useful to determine if a bucket exists.
@@ -999,7 +1147,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "HeadBucket")
defer logger.AuditLog(w, r, "HeadBucket", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1029,7 +1177,7 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucket")
defer logger.AuditLog(w, r, "DeleteBucket", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1117,7 +1265,7 @@ func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.
func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketObjectLockConfig")
defer logger.AuditLog(w, r, "PutBucketObjectLockConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1173,7 +1321,7 @@ func (api objectAPIHandlers) PutBucketObjectLockConfigHandler(w http.ResponseWri
func (api objectAPIHandlers) GetBucketObjectLockConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketObjectLockConfig")
defer logger.AuditLog(w, r, "GetBucketObjectLockConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1211,7 +1359,7 @@ func (api objectAPIHandlers) GetBucketObjectLockConfigHandler(w http.ResponseWri
func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketTagging")
defer logger.AuditLog(w, r, "PutBucketTagging", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1255,7 +1403,7 @@ func (api objectAPIHandlers) PutBucketTaggingHandler(w http.ResponseWriter, r *h
func (api objectAPIHandlers) GetBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketTagging")
defer logger.AuditLog(w, r, "GetBucketTagging", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1293,7 +1441,7 @@ func (api objectAPIHandlers) GetBucketTaggingHandler(w http.ResponseWriter, r *h
func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketTagging")
defer logger.AuditLog(w, r, "DeleteBucketTagging", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1323,7 +1471,7 @@ func (api objectAPIHandlers) DeleteBucketTaggingHandler(w http.ResponseWriter, r
// Add a replication configuration on the specified bucket as specified in https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html
func (api objectAPIHandlers) PutBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketReplicationConfig")
defer logger.AuditLog(w, r, "PutBucketReplicationConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1336,11 +1484,6 @@ func (api objectAPIHandlers) PutBucketReplicationConfigHandler(w http.ResponseWr
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrNotImplemented), r.URL)
return
}
// Turn off replication if disk crawl is unavailable.
if env.Get(envDataUsageCrawlConf, config.EnableOn) == config.EnableOff {
writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrBucketReplicationDisabledError), r.URL)
return
}
if s3Error := checkRequestAuthType(ctx, r, policy.PutReplicationConfigurationAction, bucket, ""); s3Error != ErrNone {
writeErrorResponse(ctx, w, errorCodes.ToAPIErr(s3Error), r.URL, guessIsBrowserReq(r))
return
@@ -1392,7 +1535,7 @@ func (api objectAPIHandlers) PutBucketReplicationConfigHandler(w http.ResponseWr
func (api objectAPIHandlers) GetBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketReplicationConfig")
defer logger.AuditLog(w, r, "GetBucketReplicationConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -1433,7 +1576,7 @@ func (api objectAPIHandlers) GetBucketReplicationConfigHandler(w http.ResponseWr
// ----------
func (api objectAPIHandlers) DeleteBucketReplicationConfigHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketReplicationConfig")
defer logger.AuditLog(w, r, "DeleteBucketReplicationConfig", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]


@@ -35,7 +35,7 @@ func TestRemoveBucketHandler(t *testing.T) {
func testRemoveBucketHandler(obj ObjectLayer, instanceType, bucketName string, apiRouter http.Handler,
credentials auth.Credentials, t *testing.T) {
_, err := obj.PutObject(GlobalContext, bucketName, "test-object", mustGetPutObjReader(t, bytes.NewBuffer([]byte{}), int64(0), "", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"), ObjectOptions{})
_, err := obj.PutObject(GlobalContext, bucketName, "test-object", mustGetPutObjReader(t, bytes.NewReader([]byte{}), int64(0), "", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"), ObjectOptions{})
// if object upload fails stop the test.
if err != nil {
t.Fatalf("Error uploading object: <ERROR> %v", err)
@@ -669,7 +669,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
for i := 0; i < 10; i++ {
objectName := "test-object-" + strconv.Itoa(i)
// uploading the object.
_, err = obj.PutObject(GlobalContext, bucketName, objectName, mustGetPutObjReader(t, bytes.NewBuffer(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
_, err = obj.PutObject(GlobalContext, bucketName, objectName, mustGetPutObjReader(t, bytes.NewReader(contentBytes), int64(len(contentBytes)), "", sha256sum), ObjectOptions{})
// if object upload fails stop the test.
if err != nil {
t.Fatalf("Put Object %d: Error uploading object: <ERROR> %v", i, err)


@@ -38,7 +38,7 @@ const (
func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketLifecycle")
defer logger.AuditLog(w, r, "PutBucketLifecycle", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {
@@ -103,7 +103,7 @@ func (api objectAPIHandlers) PutBucketLifecycleHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketLifecycle")
defer logger.AuditLog(w, r, "GetBucketLifecycle", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {
@@ -145,7 +145,7 @@ func (api objectAPIHandlers) GetBucketLifecycleHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) DeleteBucketLifecycleHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketLifecycle")
defer logger.AuditLog(w, r, "DeleteBucketLifecycle", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {


@@ -28,7 +28,6 @@ import (
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/crypto"
xhttp "github.com/minio/minio/cmd/http"
"github.com/minio/minio/cmd/logger"
sse "github.com/minio/minio/pkg/bucket/encryption"
@@ -66,6 +65,46 @@ func NewLifecycleSys() *LifecycleSys {
return &LifecycleSys{}
}
type expiryTask struct {
objInfo ObjectInfo
versionExpiry bool
}
type expiryState struct {
expiryCh chan expiryTask
}
func (es *expiryState) queueExpiryTask(oi ObjectInfo, rmVersion bool) {
select {
case es.expiryCh <- expiryTask{objInfo: oi, versionExpiry: rmVersion}:
default:
}
}
var (
globalExpiryState *expiryState
)
func newExpiryState() *expiryState {
es := &expiryState{
expiryCh: make(chan expiryTask, 10000),
}
go func() {
<-GlobalContext.Done()
close(es.expiryCh)
}()
return es
}
func initBackgroundExpiry(ctx context.Context, objectAPI ObjectLayer) {
globalExpiryState = newExpiryState()
go func() {
for t := range globalExpiryState.expiryCh {
applyExpiryRule(ctx, objectAPI, t.objInfo, false, t.versionExpiry)
}
}()
}
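queueExpiryTask above uses a non-blocking send: when the 10000-entry buffer is full, the task is silently dropped rather than stalling the caller, and the object can be picked up again on a later lifecycle scan. A minimal sketch of that pattern:

```go
package main

import "fmt"

// nonBlockingEnqueue mirrors queueExpiryTask above: a select with a
// default arm drops the task when the buffered channel is full instead
// of blocking the caller.
func nonBlockingEnqueue(ch chan<- int, task int) bool {
	select {
	case ch <- task:
		return true
	default:
		return false // queue full; caller moves on
	}
}

func main() {
	ch := make(chan int, 2) // the real channel holds 10000 expiry tasks
	for i := 0; i < 4; i++ {
		fmt.Printf("task %d queued: %v\n", i, nonBlockingEnqueue(ch, i))
	}
}
```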
type transitionState struct {
// add future metrics here
transitionCh chan ObjectInfo
@@ -206,16 +245,11 @@ func transitionSCInUse(ctx context.Context, lfc *lifecycle.Lifecycle, bucket, ar
}
// set PutObjectOptions for PUT operation to transition data to target cluster
func putTransitionOpts(objInfo ObjectInfo) (putOpts miniogo.PutObjectOptions) {
func putTransitionOpts(objInfo ObjectInfo) (putOpts miniogo.PutObjectOptions, err error) {
meta := make(map[string]string)
-tag, err := tags.ParseObjectTags(objInfo.UserTags)
-if err != nil {
-return
-}
putOpts = miniogo.PutObjectOptions{
UserMetadata: meta,
-UserTags: tag.ToMap(),
ContentType: objInfo.ContentType,
ContentEncoding: objInfo.ContentEncoding,
StorageClass: objInfo.StorageClass,
@@ -225,29 +259,47 @@ func putTransitionOpts(objInfo ObjectInfo) (putOpts miniogo.PutObjectOptions) {
SourceETag: objInfo.ETag,
},
}
-if mode, ok := objInfo.UserDefined[xhttp.AmzObjectLockMode]; ok {
if objInfo.UserTags != "" {
tag, _ := tags.ParseObjectTags(objInfo.UserTags)
if tag != nil {
putOpts.UserTags = tag.ToMap()
}
}
lkMap := caseInsensitiveMap(objInfo.UserDefined)
if lang, ok := lkMap.Lookup(xhttp.ContentLanguage); ok {
putOpts.ContentLanguage = lang
}
if disp, ok := lkMap.Lookup(xhttp.ContentDisposition); ok {
putOpts.ContentDisposition = disp
}
if cc, ok := lkMap.Lookup(xhttp.CacheControl); ok {
putOpts.CacheControl = cc
}
if mode, ok := lkMap.Lookup(xhttp.AmzObjectLockMode); ok {
rmode := miniogo.RetentionMode(mode)
putOpts.Mode = rmode
}
if retainDateStr, ok := objInfo.UserDefined[xhttp.AmzObjectLockRetainUntilDate]; ok {
if retainDateStr, ok := lkMap.Lookup(xhttp.AmzObjectLockRetainUntilDate); ok {
rdate, err := time.Parse(time.RFC3339, retainDateStr)
if err != nil {
return
return putOpts, err
}
putOpts.RetainUntilDate = rdate
}
if lhold, ok := objInfo.UserDefined[xhttp.AmzObjectLockLegalHold]; ok {
if lhold, ok := lkMap.Lookup(xhttp.AmzObjectLockLegalHold); ok {
putOpts.LegalHold = miniogo.LegalHoldStatus(lhold)
}
return
return putOpts, nil
}
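putTransitionOpts now resolves object-lock and content headers through a case-insensitive lookup, since user-defined metadata keys can arrive with arbitrary casing. The caseInsensitiveMap helper itself is not part of this diff; a hypothetical equivalent:

```go
package main

import (
	"fmt"
	"strings"
)

// ciMap is a hypothetical stand-in for the caseInsensitiveMap helper used
// above; the real implementation is not shown in this diff.
type ciMap map[string]string

// Lookup returns the value whose key matches case-insensitively.
func (m ciMap) Lookup(key string) (string, bool) {
	for k, v := range m {
		if strings.EqualFold(k, key) {
			return v, true
		}
	}
	return "", false
}

func main() {
	meta := ciMap{"X-Amz-Object-Lock-Mode": "GOVERNANCE"}
	if mode, ok := meta.Lookup("x-amz-object-lock-mode"); ok {
		fmt.Println("retention mode:", mode)
	}
}
```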
// handle deletes of transitioned objects or object versions when one of the following is true:
// 1. Temporarily restored copies of objects (restored with the PostRestoreObject API) have expired.
// 2. The lifecycle expiry date has been met on the object.
// 3. The object is removed through the DELETE API call.
func deleteTransitionedObject(ctx context.Context, objectAPI ObjectLayer, bucket, object string, lcOpts lifecycle.ObjectOpts, action lifecycle.Action, isDeleteTierOnly bool) error {
func deleteTransitionedObject(ctx context.Context, objectAPI ObjectLayer, bucket, object string, lcOpts lifecycle.ObjectOpts, restoredObject, isDeleteTierOnly bool) error {
if lcOpts.TransitionStatus == "" && !isDeleteTierOnly {
return nil
}
@@ -267,46 +319,41 @@ func deleteTransitionedObject(ctx context.Context, objectAPI ObjectLayer, bucket
var opts ObjectOptions
opts.Versioned = globalBucketVersioningSys.Enabled(bucket)
opts.VersionID = lcOpts.VersionID
switch action {
case lifecycle.DeleteRestoredAction, lifecycle.DeleteRestoredVersionAction:
if restoredObject {
// delete locally restored copy of object or object version
// from the source, while leaving metadata behind. The data on
// transitioned tier lies untouched and still accessible
opts.TransitionStatus = lcOpts.TransitionStatus
_, err = objectAPI.DeleteObject(ctx, bucket, object, opts)
return err
case lifecycle.DeleteAction, lifecycle.DeleteVersionAction:
// When an object is past expiry, delete the data from transitioned tier and
// metadata from source
if err := tgt.RemoveObject(context.Background(), arn.Bucket, object, miniogo.RemoveObjectOptions{VersionID: lcOpts.VersionID}); err != nil {
logger.LogIf(ctx, err)
}
if isDeleteTierOnly {
return nil
}
_, err = objectAPI.DeleteObject(ctx, bucket, object, opts)
if err != nil {
return err
}
eventName := event.ObjectRemovedDelete
if lcOpts.DeleteMarker {
eventName = event.ObjectRemovedDeleteMarkerCreated
}
objInfo := ObjectInfo{
Name: object,
VersionID: lcOpts.VersionID,
DeleteMarker: lcOpts.DeleteMarker,
}
// Notify object deleted event.
sendEvent(eventArgs{
EventName: eventName,
BucketName: bucket,
Object: objInfo,
Host: "Internal: [ILM-EXPIRY]",
})
}
// When an object is past expiry, delete the data from transitioned tier and
// metadata from source
if err := tgt.RemoveObject(context.Background(), arn.Bucket, object, miniogo.RemoveObjectOptions{VersionID: lcOpts.VersionID}); err != nil {
logger.LogIf(ctx, err)
}
if isDeleteTierOnly {
return nil
}
objInfo, err := objectAPI.DeleteObject(ctx, bucket, object, opts)
if err != nil {
return err
}
eventName := event.ObjectRemovedDelete
if lcOpts.DeleteMarker {
eventName = event.ObjectRemovedDeleteMarkerCreated
}
// Notify object deleted event.
sendEvent(eventArgs{
EventName: eventName,
BucketName: bucket,
Object: objInfo,
Host: "Internal: [ILM-EXPIRY]",
})
// should never reach here
return nil
}
@@ -341,13 +388,19 @@ func transitionObject(ctx context.Context, objectAPI ObjectLayer, objInfo Object
return err
}
oi := gr.ObjInfo
if oi.TransitionStatus == lifecycle.TransitionComplete {
gr.Close()
return nil
}
-putOpts := putTransitionOpts(oi)
-if _, err = tgt.PutObject(ctx, arn.Bucket, oi.Name, gr, oi.Size, "", "", putOpts); err != nil {
+putOpts, err := putTransitionOpts(oi)
+if err != nil {
+gr.Close()
+return err
+}
+if _, err = tgt.PutObject(ctx, arn.Bucket, oi.Name, gr, oi.Size, putOpts); err != nil {
gr.Close()
return err
}
gr.Close()
@@ -358,20 +411,19 @@ func transitionObject(ctx context.Context, objectAPI ObjectLayer, objInfo Object
opts.TransitionStatus = lifecycle.TransitionComplete
eventName := event.ObjectTransitionComplete
_, err = objectAPI.DeleteObject(ctx, oi.Bucket, oi.Name, opts)
objInfo, err = objectAPI.DeleteObject(ctx, oi.Bucket, oi.Name, opts)
if err != nil {
eventName = event.ObjectTransitionFailed
}
// Notify object deleted event.
sendEvent(eventArgs{
EventName: eventName,
-BucketName: oi.Bucket,
-Object: ObjectInfo{
-Name: oi.Name,
-VersionID: opts.VersionID,
-},
-Host: "Internal: [ILM-Transition]",
+BucketName: objInfo.Bucket,
+Object: objInfo,
+Host: "Internal: [ILM-Transition]",
})
return err
}
@@ -421,7 +473,7 @@ func getTransitionedObjectReader(ctx context.Context, bucket, object string, rs
}
}
reader, _, _, err := tgt.GetObject(ctx, arn.Bucket, object, gopts)
reader, err := tgt.GetObject(ctx, arn.Bucket, object, gopts)
if err != nil {
return nil, err
}
@@ -547,7 +599,7 @@ func (r *RestoreObjectRequest) validate(ctx context.Context, objAPI ObjectLayer)
if r.OutputLocation.S3.Prefix == "" {
return fmt.Errorf("Prefix is a required parameter in OutputLocation")
}
if r.OutputLocation.S3.Encryption.EncryptionType != crypto.SSEAlgorithmAES256 {
if r.OutputLocation.S3.Encryption.EncryptionType != xhttp.AmzEncryptionAES {
return NotImplemented{}
}
}
@@ -573,7 +625,7 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
}
meta[xhttp.AmzObjectTagging] = rreq.OutputLocation.S3.Tagging.String()
if rreq.OutputLocation.S3.Encryption.EncryptionType != "" {
meta[crypto.SSEHeader] = crypto.SSEAlgorithmAES256
meta[xhttp.AmzServerSideEncryption] = xhttp.AmzEncryptionAES
}
return ObjectOptions{
Versioned: globalBucketVersioningSys.Enabled(bucket),
@@ -640,11 +692,11 @@ func restoreTransitionedObject(ctx context.Context, bucket, object string, objAP
return err
}
defer gr.Close()
hashReader, err := hash.NewReader(gr, objInfo.Size, "", "", objInfo.Size, globalCLIContext.StrictS3Compat)
hashReader, err := hash.NewReader(gr, objInfo.Size, "", "", objInfo.Size)
if err != nil {
return err
}
pReader := NewPutObjReader(hashReader, nil, nil)
pReader := NewPutObjReader(hashReader)
opts := putRestoreOpts(bucket, object, rreq, objInfo)
opts.UserDefined[xhttp.AmzRestore] = fmt.Sprintf("ongoing-request=%t, expiry-date=%s", false, restoreExpiry.Format(http.TimeFormat))
if _, err := objAPI.PutObject(ctx, bucket, object, pReader, opts); err != nil {


@@ -26,32 +26,25 @@ import (
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/bucket/policy"
"github.com/minio/minio/pkg/handlers"
"github.com/minio/minio/pkg/sync/errgroup"
)
func concurrentDecryptETag(ctx context.Context, objects []ObjectInfo) {
inParallel := func(objects []ObjectInfo) {
g := errgroup.WithNErrs(len(objects))
for index := range objects {
index := index
g.Go(func() error {
objects[index].ETag = objects[index].GetActualETag(nil)
objects[index].Size, _ = objects[index].GetActualSize()
return nil
}, index)
}
g.Wait()
}
const maxConcurrent = 500
for {
if len(objects) < maxConcurrent {
inParallel(objects)
return
}
inParallel(objects[:maxConcurrent])
objects = objects[maxConcurrent:]
g := errgroup.WithNErrs(len(objects)).WithConcurrency(500)
_, cancel := g.WithCancelOnError(ctx)
defer cancel()
for index := range objects {
index := index
g.Go(func() error {
size, err := objects[index].GetActualSize()
if err == nil {
objects[index].Size = size
}
objects[index].ETag = objects[index].GetActualETag(nil)
return nil
}, index)
}
g.WaitErr()
}
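The rewrite above replaces hand-rolled batches of 500 with a concurrency-limited errgroup plus cancel-on-error. MinIO's errgroup package is internal; as a stand-in, the stock golang.org/x/sync/errgroup expresses the same bound with SetLimit (cancellation on first error is omitted here for brevity):

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	items := make([]int, 2000)

	// SetLimit plays the role of WithConcurrency above: at most 500
	// goroutines are in flight at any time.
	var g errgroup.Group
	g.SetLimit(500)

	for i := range items {
		i := i // capture loop variable (pre-Go 1.22 semantics)
		g.Go(func() error {
			items[i] = i * i // stand-in for per-object ETag/size decryption
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("processed", len(items), "items")
}
```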
// Validate all the ListObjects query arguments, returns an APIErrorCode
@@ -82,7 +75,7 @@ func validateListObjectsArgs(marker, delimiter, encodingType string, maxKeys int
func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectVersions")
defer logger.AuditLog(w, r, "ListObjectVersions", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -143,7 +136,7 @@ func (api objectAPIHandlers) ListObjectVersionsHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectsV2M")
defer logger.AuditLog(w, r, "ListObjectsV2M", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -210,7 +203,7 @@ func (api objectAPIHandlers) ListObjectsV2MHandler(w http.ResponseWriter, r *htt
func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectsV2")
defer logger.AuditLog(w, r, "ListObjectsV2", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -301,10 +294,6 @@ func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http
return proxyRequest(ctx, w, r, ep)
}
func proxyRequestByStringHash(ctx context.Context, w http.ResponseWriter, r *http.Request, str string) (success bool) {
return proxyRequestByNodeIndex(ctx, w, r, crcHashMod(str, len(globalProxyEndpoints)))
}
// ListObjectsV1Handler - GET Bucket (List Objects) Version 1.
// --------------------------
// This implementation of the GET operation returns some or all (up to 10000)
@@ -314,7 +303,7 @@ func proxyRequestByStringHash(ctx context.Context, w http.ResponseWriter, r *htt
func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "ListObjectsV1")
defer logger.AuditLog(w, r, "ListObjectsV1", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -343,15 +332,6 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
return
}
// Forward the request using Source IP or bucket
forwardStr := handlers.GetSourceIPFromHeaders(r)
if forwardStr == "" {
forwardStr = bucket
}
if proxyRequestByStringHash(ctx, w, r, forwardStr) {
return
}
listObjects := objectAPI.ListObjects
// Inititate a list objects operation based on the input params.


@@ -24,6 +24,7 @@ import (
"sync"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/crypto"
"github.com/minio/minio/cmd/logger"
bucketsse "github.com/minio/minio/pkg/bucket/encryption"
"github.com/minio/minio/pkg/bucket/lifecycle"
@@ -168,7 +169,10 @@ func (sys *BucketMetadataSys) Update(bucket string, configFile string, configDat
}
meta.ReplicationConfigXML = configData
case bucketTargetsFile:
meta.BucketTargetsConfigJSON = configData
meta.BucketTargetsConfigJSON, meta.BucketTargetsConfigMetaJSON, err = encryptBucketMetadata(meta.Name, configData, crypto.Context{bucket: meta.Name, bucketTargetsFile: bucketTargetsFile})
if err != nil {
return fmt.Errorf("Error encrypting bucket target metadata: %w", err)
}
default:
return fmt.Errorf("Unknown bucket %s metadata update requested %s", bucket, configFile)
}
@@ -191,7 +195,6 @@ func (sys *BucketMetadataSys) Update(bucket string, configFile string, configDat
// This function should only be used with
// - GetBucketInfo
// - ListBuckets
// - ListBucketsHeal (only in case of erasure coding mode)
// For all other bucket specific metadata, use the relevant
// calls implemented specifically for each of those features.
func (sys *BucketMetadataSys) Get(bucket string) (BucketMetadata, error) {
@@ -440,6 +443,10 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []Buck
for index := range buckets {
index := index
g.Go(func() error {
_, _ = objAPI.HealBucket(ctx, buckets[index].Name, madmin.HealOpts{
// Ensure heal opts for bucket metadata be deep healed all the time.
ScanMode: madmin.HealDeepScan,
})
meta, err := loadBucketMetadata(ctx, objAPI, buckets[index].Name)
if err != nil {
return err
@@ -470,6 +477,15 @@ func (sys *BucketMetadataSys) load(ctx context.Context, buckets []BucketInfo, ob
}
}
// Reset the state of the BucketMetadataSys.
func (sys *BucketMetadataSys) Reset() {
sys.Lock()
for k := range sys.metadataMap {
delete(sys.metadataMap, k)
}
sys.Unlock()
}
// NewBucketMetadataSys - creates new policy system.
func NewBucketMetadataSys() *BucketMetadataSys {
return &BucketMetadataSys{


@@ -19,6 +19,7 @@ package cmd
import (
"bytes"
"context"
"crypto/rand"
"encoding/binary"
"encoding/json"
"encoding/xml"
@@ -28,6 +29,7 @@ import (
"time"
"github.com/minio/minio-go/v7/pkg/tags"
"github.com/minio/minio/cmd/crypto"
"github.com/minio/minio/cmd/logger"
bucketsse "github.com/minio/minio/pkg/bucket/encryption"
"github.com/minio/minio/pkg/bucket/lifecycle"
@@ -37,6 +39,7 @@ import (
"github.com/minio/minio/pkg/bucket/versioning"
"github.com/minio/minio/pkg/event"
"github.com/minio/minio/pkg/madmin"
"github.com/minio/sio"
)
const (
@@ -61,31 +64,33 @@ var (
// bucketMetadataFormat refers to the format.
// bucketMetadataVersion can be used to track a rolling upgrade of a field.
type BucketMetadata struct {
-Name string
-Created time.Time
-LockEnabled bool // legacy not used anymore.
-PolicyConfigJSON []byte
-NotificationConfigXML []byte
-LifecycleConfigXML []byte
-ObjectLockConfigXML []byte
-VersioningConfigXML []byte
-EncryptionConfigXML []byte
-TaggingConfigXML []byte
-QuotaConfigJSON []byte
-ReplicationConfigXML []byte
-BucketTargetsConfigJSON []byte
+Name string
+Created time.Time
+LockEnabled bool // legacy not used anymore.
+PolicyConfigJSON []byte
+NotificationConfigXML []byte
+LifecycleConfigXML []byte
+ObjectLockConfigXML []byte
+VersioningConfigXML []byte
+EncryptionConfigXML []byte
+TaggingConfigXML []byte
+QuotaConfigJSON []byte
+ReplicationConfigXML []byte
+BucketTargetsConfigJSON []byte
+BucketTargetsConfigMetaJSON []byte
// Unexported fields. Must be updated atomically.
-policyConfig *policy.Policy
-notificationConfig *event.Config
-lifecycleConfig *lifecycle.Lifecycle
-objectLockConfig *objectlock.Config
-versioningConfig *versioning.Versioning
-sseConfig *bucketsse.BucketSSEConfig
-taggingConfig *tags.Tags
-quotaConfig *madmin.BucketQuota
-replicationConfig *replication.Config
-bucketTargetConfig *madmin.BucketTargets
+policyConfig *policy.Policy
+notificationConfig *event.Config
+lifecycleConfig *lifecycle.Lifecycle
+objectLockConfig *objectlock.Config
+versioningConfig *versioning.Versioning
+sseConfig *bucketsse.BucketSSEConfig
+taggingConfig *tags.Tags
+quotaConfig *madmin.BucketQuota
+replicationConfig *replication.Config
+bucketTargetConfig *madmin.BucketTargets
+bucketTargetConfigMeta map[string]string
}
// newBucketMetadata creates BucketMetadata with the supplied name and Created to Now.
@@ -100,7 +105,8 @@ func newBucketMetadata(name string) BucketMetadata {
versioningConfig: &versioning.Versioning{
XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/",
},
-bucketTargetConfig: &madmin.BucketTargets{},
+bucketTargetConfig: &madmin.BucketTargets{},
+bucketTargetConfigMeta: make(map[string]string),
}
}
@@ -140,16 +146,16 @@ func (b *BucketMetadata) Load(ctx context.Context, api ObjectLayer, name string)
func loadBucketMetadata(ctx context.Context, objectAPI ObjectLayer, bucket string) (BucketMetadata, error) {
b := newBucketMetadata(bucket)
err := b.Load(ctx, objectAPI, b.Name)
-if err == nil {
-return b, b.convertLegacyConfigs(ctx, objectAPI)
-}
-if !errors.Is(err, errConfigNotFound) {
+if err != nil && !errors.Is(err, errConfigNotFound) {
return b, err
}
// Old bucket without bucket metadata. Hence we migrate existing settings.
-return b, b.convertLegacyConfigs(ctx, objectAPI)
+if err := b.convertLegacyConfigs(ctx, objectAPI); err != nil {
+return b, err
+}
+// migrate unencrypted remote targets
+return b, b.migrateTargetConfig(ctx, objectAPI)
}
// parseAllConfigs will parse all configs and populate the private fields.
@@ -234,7 +240,8 @@ func (b *BucketMetadata) parseAllConfigs(ctx context.Context, objectAPI ObjectLa
}
if len(b.BucketTargetsConfigJSON) != 0 {
-if err = json.Unmarshal(b.BucketTargetsConfigJSON, b.bucketTargetConfig); err != nil {
+b.bucketTargetConfig, err = parseBucketTargetConfig(b.Name, b.BucketTargetsConfigJSON, b.BucketTargetsConfigMetaJSON)
+if err != nil {
return err
}
} else {
@@ -354,6 +361,7 @@ func (b *BucketMetadata) Save(ctx context.Context, api ObjectLayer) error {
if err != nil {
return err
}
configFile := path.Join(bucketConfigPrefix, b.Name, bucketMetadataFile)
return saveConfig(ctx, api, configFile, data)
}
@@ -373,3 +381,76 @@ func deleteBucketMetadata(ctx context.Context, obj objectDeleter, bucket string)
}
return nil
}
// migrate config for remote targets by encrypting data if currently unencrypted and kms is configured.
func (b *BucketMetadata) migrateTargetConfig(ctx context.Context, objectAPI ObjectLayer) error {
var err error
// early return if no targets or already encrypted
if len(b.BucketTargetsConfigJSON) == 0 || GlobalKMS == nil || len(b.BucketTargetsConfigMetaJSON) != 0 {
return nil
}
encBytes, metaBytes, err := encryptBucketMetadata(b.Name, b.BucketTargetsConfigJSON, crypto.Context{b.Name: b.Name, bucketTargetsFile: bucketTargetsFile})
if err != nil {
return err
}
b.BucketTargetsConfigJSON = encBytes
b.BucketTargetsConfigMetaJSON = metaBytes
return b.Save(ctx, objectAPI)
}
// encrypt bucket metadata if kms is configured.
func encryptBucketMetadata(bucket string, input []byte, kmsContext crypto.Context) (output, metabytes []byte, err error) {
var sealedKey crypto.SealedKey
if GlobalKMS == nil {
output = input
return
}
var (
key [32]byte
encKey []byte
)
metadata := make(map[string]string)
key, encKey, err = GlobalKMS.GenerateKey(GlobalKMS.DefaultKeyID(), kmsContext)
if err != nil {
return
}
outbuf := bytes.NewBuffer(nil)
objectKey := crypto.GenerateKey(key, rand.Reader)
sealedKey = objectKey.Seal(key, crypto.GenerateIV(rand.Reader), crypto.S3.String(), bucket, "")
crypto.S3.CreateMetadata(metadata, GlobalKMS.DefaultKeyID(), encKey, sealedKey)
_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20})
if err != nil {
return output, metabytes, err
}
metabytes, err = json.Marshal(metadata)
if err != nil {
return
}
return outbuf.Bytes(), metabytes, nil
}
// decrypt bucket metadata if kms is configured.
func decryptBucketMetadata(input []byte, bucket string, meta map[string]string, kmsContext crypto.Context) ([]byte, error) {
if GlobalKMS == nil {
return nil, errKMSNotConfigured
}
keyID, kmsKey, sealedKey, err := crypto.S3.ParseMetadata(meta)
if err != nil {
return nil, err
}
extKey, err := GlobalKMS.UnsealKey(keyID, kmsKey, kmsContext)
if err != nil {
return nil, err
}
var objectKey crypto.ObjectKey
if err = objectKey.Unseal(extKey, sealedKey, crypto.S3.String(), bucket, ""); err != nil {
return nil, err
}
outbuf := bytes.NewBuffer(nil)
_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20})
return outbuf.Bytes(), err
}


@@ -102,6 +102,12 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
err = msgp.WrapError(err, "BucketTargetsConfigJSON")
return
}
case "BucketTargetsConfigMetaJSON":
z.BucketTargetsConfigMetaJSON, err = dc.ReadBytes(z.BucketTargetsConfigMetaJSON)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigMetaJSON")
return
}
default:
err = dc.Skip()
if err != nil {
@@ -115,9 +121,9 @@ func (z *BucketMetadata) DecodeMsg(dc *msgp.Reader) (err error) {
// EncodeMsg implements msgp.Encodable
func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
-// map header, size 13
+// map header, size 14
// write "Name"
-err = en.Append(0x8d, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
+err = en.Append(0x8e, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
if err != nil {
return
}
@@ -246,15 +252,25 @@ func (z *BucketMetadata) EncodeMsg(en *msgp.Writer) (err error) {
err = msgp.WrapError(err, "BucketTargetsConfigJSON")
return
}
// write "BucketTargetsConfigMetaJSON"
err = en.Append(0xbb, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x4d, 0x65, 0x74, 0x61, 0x4a, 0x53, 0x4f, 0x4e)
if err != nil {
return
}
err = en.WriteBytes(z.BucketTargetsConfigMetaJSON)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigMetaJSON")
return
}
return
}
// MarshalMsg implements msgp.Marshaler
func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
o = msgp.Require(b, z.Msgsize())
-// map header, size 13
+// map header, size 14
// string "Name"
-o = append(o, 0x8d, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
+o = append(o, 0x8e, 0xa4, 0x4e, 0x61, 0x6d, 0x65)
o = msgp.AppendString(o, z.Name)
// string "Created"
o = append(o, 0xa7, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64)
@@ -292,6 +308,9 @@ func (z *BucketMetadata) MarshalMsg(b []byte) (o []byte, err error) {
// string "BucketTargetsConfigJSON"
o = append(o, 0xb7, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x4a, 0x53, 0x4f, 0x4e)
o = msgp.AppendBytes(o, z.BucketTargetsConfigJSON)
// string "BucketTargetsConfigMetaJSON"
o = append(o, 0xbb, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x4d, 0x65, 0x74, 0x61, 0x4a, 0x53, 0x4f, 0x4e)
o = msgp.AppendBytes(o, z.BucketTargetsConfigMetaJSON)
return
}
@@ -391,6 +410,12 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
err = msgp.WrapError(err, "BucketTargetsConfigJSON")
return
}
case "BucketTargetsConfigMetaJSON":
z.BucketTargetsConfigMetaJSON, bts, err = msgp.ReadBytesBytes(bts, z.BucketTargetsConfigMetaJSON)
if err != nil {
err = msgp.WrapError(err, "BucketTargetsConfigMetaJSON")
return
}
default:
bts, err = msgp.Skip(bts)
if err != nil {
@@ -405,6 +430,6 @@ func (z *BucketMetadata) UnmarshalMsg(bts []byte) (o []byte, err error) {
// Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
func (z *BucketMetadata) Msgsize() (s int) {
-s = 1 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON)
+s = 1 + 5 + msgp.StringPrefixSize + len(z.Name) + 8 + msgp.TimeSize + 12 + msgp.BoolSize + 17 + msgp.BytesPrefixSize + len(z.PolicyConfigJSON) + 22 + msgp.BytesPrefixSize + len(z.NotificationConfigXML) + 19 + msgp.BytesPrefixSize + len(z.LifecycleConfigXML) + 20 + msgp.BytesPrefixSize + len(z.ObjectLockConfigXML) + 20 + msgp.BytesPrefixSize + len(z.VersioningConfigXML) + 20 + msgp.BytesPrefixSize + len(z.EncryptionConfigXML) + 17 + msgp.BytesPrefixSize + len(z.TaggingConfigXML) + 16 + msgp.BytesPrefixSize + len(z.QuotaConfigJSON) + 21 + msgp.BytesPrefixSize + len(z.ReplicationConfigXML) + 24 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigJSON) + 28 + msgp.BytesPrefixSize + len(z.BucketTargetsConfigMetaJSON)
return
}
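The Msgsize delta of 28 + msgp.BytesPrefixSize + len(...) is easy to verify: the field name is 27 bytes and MessagePack's fixstr header adds one byte, which is also why EncodeMsg appends 0xbb (0xa0|27) before the key bytes. A tiny check:

```go
package main

import "fmt"

func main() {
	const name = "BucketTargetsConfigMetaJSON"
	// MessagePack encodes strings up to 31 bytes with a single fixstr
	// header byte (0xa0 | length); 0xa0|27 = 0xbb, the first byte
	// appended for this key in EncodeMsg above. Hence the map key costs
	// len(name)+1 = 28 bytes, the constant added in Msgsize.
	fmt.Printf("header: %#x, key cost: %d bytes\n", 0xa0|len(name), len(name)+1)
}
```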


@@ -39,7 +39,7 @@ const (
func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketNotification")
defer logger.AuditLog(w, r, "GetBucketNotification", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucketName := vars["bucket"]
@@ -111,7 +111,7 @@ func (api objectAPIHandlers) GetBucketNotificationHandler(w http.ResponseWriter,
func (api objectAPIHandlers) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketNotification")
defer logger.AuditLog(w, r, "PutBucketNotification", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objectAPI := api.ObjectAPI()
if objectAPI == nil {


@@ -97,6 +97,9 @@ func enforceRetentionBypassForDelete(ctx context.Context, r *http.Request, bucke
return ErrNone
}
}
if isErrObjectNotFound(gerr) || isErrVersionNotFound(gerr) {
return ErrNone
}
return toAPIErrorCode(ctx, gerr)
}


@@ -40,7 +40,7 @@ const (
func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketPolicy")
defer logger.AuditLog(w, r, "PutBucketPolicy", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {
@@ -106,7 +106,7 @@ func (api objectAPIHandlers) PutBucketPolicyHandler(w http.ResponseWriter, r *ht
func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "DeleteBucketPolicy")
defer logger.AuditLog(w, r, "DeleteBucketPolicy", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {
@@ -141,7 +141,7 @@ func (api objectAPIHandlers) DeleteBucketPolicyHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketPolicy")
defer logger.AuditLog(w, r, "GetBucketPolicy", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
objAPI := api.ObjectAPI()
if objAPI == nil {


@@ -83,17 +83,38 @@ func getConditionValues(r *http.Request, lc string, username string, claims map[
}
}
authType := getRequestAuthType(r)
var signatureVersion string
switch authType {
case authTypeSignedV2, authTypePresignedV2:
signatureVersion = signV2Algorithm
case authTypeSigned, authTypePresigned, authTypeStreamingSigned, authTypePostPolicy:
signatureVersion = signV4Algorithm
}
var authtype string
switch authType {
case authTypePresignedV2, authTypePresigned:
authtype = "REST-QUERY-STRING"
case authTypeSignedV2, authTypeSigned, authTypeStreamingSigned:
authtype = "REST-HEADER"
case authTypePostPolicy:
authtype = "POST"
}
args := map[string][]string{
"CurrentTime": {currTime.Format(time.RFC3339)},
"EpochTime": {strconv.FormatInt(currTime.Unix(), 10)},
"SecureTransport": {strconv.FormatBool(r.TLS != nil)},
"SourceIp": {handlers.GetSourceIP(r)},
"UserAgent": {r.UserAgent()},
"Referer": {r.Referer()},
"principaltype": {principalType},
"userid": {username},
"username": {username},
"versionid": {vid},
"CurrentTime": {currTime.Format(time.RFC3339)},
"EpochTime": {strconv.FormatInt(currTime.Unix(), 10)},
"SecureTransport": {strconv.FormatBool(r.TLS != nil)},
"SourceIp": {handlers.GetSourceIP(r)},
"UserAgent": {r.UserAgent()},
"Referer": {r.Referer()},
"principaltype": {principalType},
"userid": {username},
"username": {username},
"versionid": {vid},
"signatureversion": {signatureVersion},
"authType": {authtype},
}
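With signatureversion and authType populated, bucket policies can condition on how a request was signed. An illustrative policy sketch (the bucket name is an example; "AWS4-HMAC-SHA256" is AWS's documented value for SigV4):

```go
package main

import "fmt"

// An illustrative bucket policy exercising the condition key populated
// above; it denies PutObject for anything not signed with SigV4.
const requireSigV4Policy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyNonSigV4",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {
      "StringNotEquals": {
        "s3:signatureversion": "AWS4-HMAC-SHA256"
      }
    }
  }]
}`

func main() { fmt.Println(requireSigV4Policy) }
```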
if lc != "" {

File diff suppressed because it is too large.


@@ -18,24 +18,31 @@ package cmd
import (
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"net/http"
"strings"
"sync"
"sync/atomic"
"time"
minio "github.com/minio/minio-go/v7"
miniogo "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio/cmd/crypto"
"github.com/minio/minio/pkg/bucket/versioning"
"github.com/minio/minio/pkg/madmin"
sha256 "github.com/minio/sha256-simd"
)
const (
defaultHealthCheckDuration = 100 * time.Second
)
// BucketTargetSys represents bucket targets subsystem
type BucketTargetSys struct {
sync.RWMutex
arnRemotesMap map[string]*miniogo.Core
arnRemotesMap map[string]*TargetClient
targetsMap map[string][]madmin.BucketTarget
}
@@ -66,7 +73,6 @@ func (sys *BucketTargetSys) ListTargets(ctx context.Context, bucket, arnType str
// ListBucketTargets - gets list of bucket targets for this bucket.
func (sys *BucketTargetSys) ListBucketTargets(ctx context.Context, bucket string) (*madmin.BucketTargets, error) {
sys.RLock()
defer sys.RUnlock()
@@ -94,7 +100,7 @@ func (sys *BucketTargetSys) SetTarget(ctx context.Context, bucket string, tgt *m
if minio.ToErrorResponse(err).Code == "NoSuchBucket" {
return BucketRemoteTargetNotFound{Bucket: tgt.TargetBucket}
}
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket}
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket, Err: err}
}
if tgt.Type == madmin.ReplicationService {
if !globalIsErasure {
@@ -105,7 +111,7 @@ func (sys *BucketTargetSys) SetTarget(ctx context.Context, bucket string, tgt *m
}
vcfg, err := clnt.GetBucketVersioning(ctx, tgt.TargetBucket)
if err != nil {
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket}
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket, Err: err}
}
if vcfg.Status != string(versioning.Enabled) {
return BucketRemoteTargetNotVersioned{Bucket: tgt.TargetBucket}
@@ -118,7 +124,7 @@ func (sys *BucketTargetSys) SetTarget(ctx context.Context, bucket string, tgt *m
if minio.ToErrorResponse(err).Code == "NoSuchBucket" {
return BucketRemoteTargetNotFound{Bucket: tgt.TargetBucket}
}
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket}
return BucketRemoteConnectionErr{Bucket: tgt.TargetBucket, Err: err}
}
if vcfg.Status != string(versioning.Enabled) {
return BucketRemoteTargetNotVersioned{Bucket: tgt.TargetBucket}
@@ -130,7 +136,7 @@ func (sys *BucketTargetSys) SetTarget(ctx context.Context, bucket string, tgt *m
tgts := sys.targetsMap[bucket]
newtgts := make([]madmin.BucketTarget, len(tgts))
labels := make(map[string]struct{})
labels := make(map[string]struct{}, len(tgts))
found := false
for idx, t := range tgts {
labels[t.Label] = struct{}{}
@@ -196,9 +202,12 @@ func (sys *BucketTargetSys) RemoveTarget(ctx context.Context, bucket, arnStr str
// delete ARN type from list of matching targets
sys.Lock()
defer sys.Unlock()
-targets := make([]madmin.BucketTarget, 0)
found := false
-tgts := sys.targetsMap[bucket]
+tgts, ok := sys.targetsMap[bucket]
+if !ok {
+return BucketRemoteTargetNotFound{Bucket: bucket}
+}
+targets := make([]madmin.BucketTarget, 0, len(tgts))
for _, tgt := range tgts {
if tgt.Arn != arnStr {
targets = append(targets, tgt)
@@ -215,7 +224,7 @@ func (sys *BucketTargetSys) RemoveTarget(ctx context.Context, bucket, arnStr str
}
// GetRemoteTargetClient returns minio-go client for replication target instance
func (sys *BucketTargetSys) GetRemoteTargetClient(ctx context.Context, arn string) *miniogo.Core {
func (sys *BucketTargetSys) GetRemoteTargetClient(ctx context.Context, arn string) *TargetClient {
sys.RLock()
defer sys.RUnlock()
return sys.arnRemotesMap[arn]
@@ -262,7 +271,7 @@ func (sys *BucketTargetSys) GetRemoteLabelWithArn(ctx context.Context, bucket, a
// NewBucketTargetSys - creates new replication system.
func NewBucketTargetSys() *BucketTargetSys {
return &BucketTargetSys{
arnRemotesMap: make(map[string]*miniogo.Core),
arnRemotesMap: make(map[string]*TargetClient),
targetsMap: make(map[string][]madmin.BucketTarget),
}
}
@@ -343,20 +352,34 @@ var getRemoteTargetInstanceTransport http.RoundTripper
var getRemoteTargetInstanceTransportOnce sync.Once
// Returns a minio-go Client configured to access remote host described in replication target config.
func (sys *BucketTargetSys) getRemoteTargetClient(tcfg *madmin.BucketTarget) (*miniogo.Core, error) {
func (sys *BucketTargetSys) getRemoteTargetClient(tcfg *madmin.BucketTarget) (*TargetClient, error) {
config := tcfg.Credentials
creds := credentials.NewStaticV4(config.AccessKey, config.SecretKey, "")
getRemoteTargetInstanceTransportOnce.Do(func() {
getRemoteTargetInstanceTransport = newGatewayHTTPTransport(1 * time.Hour)
getRemoteTargetInstanceTransport = newGatewayHTTPTransport(10 * time.Minute)
})
core, err := miniogo.NewCore(tcfg.URL().Host, &miniogo.Options{
api, err := minio.New(tcfg.Endpoint, &miniogo.Options{
Creds: creds,
Secure: tcfg.Secure,
Region: tcfg.Region,
Transport: getRemoteTargetInstanceTransport,
})
return core, err
if err != nil {
return nil, err
}
hcDuration := defaultHealthCheckDuration
if tcfg.HealthCheckDuration >= 1 { // require minimum health check duration of 1 sec.
hcDuration = tcfg.HealthCheckDuration
}
tc := &TargetClient{
Client: api,
healthCheckDuration: hcDuration,
bucket: tcfg.TargetBucket,
replicateSync: tcfg.ReplicationSync,
}
go tc.healthCheck()
return tc, nil
}
// getRemoteARN gets existing ARN for an endpoint or generates a new one.
@@ -391,3 +414,58 @@ func generateARN(t *madmin.BucketTarget) string {
}
return arn.String()
}
// Returns the parsed target config. If KMS is configured, the remote target config is decrypted.
func parseBucketTargetConfig(bucket string, cdata, cmetadata []byte) (*madmin.BucketTargets, error) {
var (
data []byte
err error
t madmin.BucketTargets
meta map[string]string
)
if len(cdata) == 0 {
return nil, nil
}
data = cdata
if len(cmetadata) != 0 {
if err := json.Unmarshal(cmetadata, &meta); err != nil {
return nil, err
}
if crypto.S3.IsEncrypted(meta) {
if data, err = decryptBucketMetadata(cdata, bucket, meta, crypto.Context{bucket: bucket, bucketTargetsFile: bucketTargetsFile}); err != nil {
return nil, err
}
}
}
if err = json.Unmarshal(data, &t); err != nil {
return nil, err
}
return &t, nil
}
// TargetClient is the struct for remote target client.
type TargetClient struct {
*miniogo.Client
up int32
healthCheckDuration time.Duration
bucket string // remote bucket target
replicateSync bool
}
func (tc *TargetClient) isOffline() bool {
return atomic.LoadInt32(&tc.up) == 0
}
func (tc *TargetClient) healthCheck() {
for {
_, err := tc.BucketExists(GlobalContext, tc.bucket)
if err != nil {
atomic.StoreInt32(&tc.up, 0)
time.Sleep(tc.healthCheckDuration)
continue
}
atomic.StoreInt32(&tc.up, 1)
time.Sleep(tc.healthCheckDuration)
}
}
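TargetClient's health checker flips a lock-free up/down flag that isOffline can read on every replication attempt. A sketch of the pattern in isolation, with a pluggable probe standing in for the BucketExists call above:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// A stripped-down version of the TargetClient health flag above: a probe
// goroutine stores 0/1 atomically and readers poll it without locks.
type target struct {
	up int32
}

func (t *target) isOffline() bool { return atomic.LoadInt32(&t.up) == 0 }

func (t *target) healthCheck(probe func() error, every time.Duration) {
	for {
		if probe() != nil {
			atomic.StoreInt32(&t.up, 0)
		} else {
			atomic.StoreInt32(&t.up, 1)
		}
		time.Sleep(every)
	}
}

func main() {
	t := &target{}
	go t.healthCheck(func() error { return nil }, 10*time.Millisecond)
	time.Sleep(50 * time.Millisecond)
	fmt.Println("offline:", t.isOffline())
}
```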


@@ -40,7 +40,7 @@ const (
func (api objectAPIHandlers) PutBucketVersioningHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "PutBucketVersioning")
defer logger.AuditLog(w, r, "PutBucketVersioning", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]
@@ -98,7 +98,7 @@ func (api objectAPIHandlers) PutBucketVersioningHandler(w http.ResponseWriter, r
func (api objectAPIHandlers) GetBucketVersioningHandler(w http.ResponseWriter, r *http.Request) {
ctx := newContext(r, w, "GetBucketVersioning")
defer logger.AuditLog(w, r, "GetBucketVersioning", mustGetClaimsFromToken(r))
defer logger.AuditLog(ctx, w, r, mustGetClaimsFromToken(r))
vars := mux.Vars(r)
bucket := vars["bucket"]


@@ -51,6 +51,11 @@ func (sys *BucketVersioningSys) Get(bucket string) (*versioning.Versioning, erro
return globalBucketMetadataSys.GetVersioningConfig(bucket)
}
// Reset BucketVersioningSys to initial state.
func (sys *BucketVersioningSys) Reset() {
// There is currently no internal state.
}
// NewBucketVersioningSys - creates new versioning system.
func NewBucketVersioningSys() *BucketVersioningSys {
return &BucketVersioningSys{}


@@ -17,6 +17,7 @@
package cmd
import (
"context"
"crypto/x509"
"encoding/gob"
"errors"
@@ -27,9 +28,11 @@ import (
"os"
"path/filepath"
"runtime"
"sort"
"strings"
"time"
"github.com/fatih/color"
dns2 "github.com/miekg/dns"
"github.com/minio/cli"
"github.com/minio/minio-go/v7/pkg/set"
@@ -38,15 +41,41 @@ import (
"github.com/minio/minio/cmd/logger"
"github.com/minio/minio/pkg/auth"
"github.com/minio/minio/pkg/certs"
"github.com/minio/minio/pkg/console"
"github.com/minio/minio/pkg/env"
"github.com/minio/minio/pkg/handlers"
)
// serverDebugLog will enable debug printing
var serverDebugLog = env.Get("_MINIO_SERVER_DEBUG", config.EnableOff) == config.EnableOn
func init() {
rand.Seed(time.Now().UTC().UnixNano())
logger.Init(GOPATH, GOROOT)
logger.RegisterError(config.FmtError)
rand.Seed(time.Now().UTC().UnixNano())
globalDNSCache = xhttp.NewDNSCache(3*time.Second, 10*time.Second)
// Inject into config package.
config.Logger.Info = logger.Info
config.Logger.LogIf = logger.LogIf
globalDNSCache = xhttp.NewDNSCache(10*time.Second, 10*time.Second, logger.LogOnceIf)
initGlobalContext()
globalForwarder = handlers.NewForwarder(&handlers.Forwarder{
PassHost: true,
RoundTripper: newGatewayHTTPTransport(1 * time.Hour),
Logger: func(err error) {
if err != nil && !errors.Is(err, context.Canceled) {
logger.LogIf(GlobalContext, err)
}
},
})
globalTransitionState = newTransitionState()
console.SetColor("Debug", color.New())
gob.Register(StorageErr(""))
}
@@ -229,6 +258,14 @@ func handleCommonEnvVars() {
}
globalDomainNames = append(globalDomainNames, domainName)
}
sort.Strings(globalDomainNames)
lcpSuf := lcpSuffix(globalDomainNames)
for _, domainName := range globalDomainNames {
if domainName == lcpSuf && len(globalDomainNames) > 1 {
logger.Fatal(config.ErrOverlappingDomainValue(nil).Msg("Overlapping domains `%s` not allowed", globalDomainNames),
"Invalid MINIO_DOMAIN value in environment variable")
}
}
}
publicIPs := env.Get(config.EnvPublicIPs, "")
@@ -274,6 +311,16 @@ func handleCommonEnvVars() {
globalConfigEncrypted = true
}
if env.IsSet(config.EnvRootUser) || env.IsSet(config.EnvRootPassword) {
cred, err := auth.CreateCredentials(env.Get(config.EnvRootUser, ""), env.Get(config.EnvRootPassword, ""))
if err != nil {
logger.Fatal(config.ErrInvalidCredentials(err),
"Unable to validate credentials inherited from the shell environment")
}
globalActiveCred = cred
globalConfigEncrypted = true
}
if env.IsSet(config.EnvAccessKeyOld) && env.IsSet(config.EnvSecretKeyOld) {
oldCred, err := auth.CreateCredentials(env.Get(config.EnvAccessKeyOld, ""), env.Get(config.EnvSecretKeyOld, ""))
if err != nil {
@@ -284,6 +331,17 @@ func handleCommonEnvVars() {
os.Unsetenv(config.EnvAccessKeyOld)
os.Unsetenv(config.EnvSecretKeyOld)
}
if env.IsSet(config.EnvRootUserOld) && env.IsSet(config.EnvRootPasswordOld) {
oldCred, err := auth.CreateCredentials(env.Get(config.EnvRootUserOld, ""), env.Get(config.EnvRootPasswordOld, ""))
if err != nil {
logger.Fatal(config.ErrInvalidCredentials(err),
"Unable to validate the old credentials inherited from the shell environment")
}
globalOldCred = oldCred
os.Unsetenv(config.EnvRootUserOld)
os.Unsetenv(config.EnvRootPasswordOld)
}
}
func logStartupMessage(msg string) {
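The new overlapping-domain guard in handleCommonEnvVars depends on lcpSuffix, a MinIO helper not shown in this diff, which returns the longest suffix common to all configured domains; if that suffix is itself one of the domains, one domain is nested inside another. A rough illustration with a hypothetical suffix helper:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// commonSuffix is a hypothetical stand-in for MinIO's lcpSuffix helper:
// the longest suffix shared by every string in the list.
func commonSuffix(ss []string) string {
	if len(ss) == 0 {
		return ""
	}
	suf := ss[0]
	for _, s := range ss[1:] {
		for !strings.HasSuffix(s, suf) {
			suf = suf[1:]
		}
	}
	return suf
}

func main() {
	domains := []string{"minio.example.com", "example.com"}
	sort.Strings(domains)
	suf := commonSuffix(domains)
	for _, d := range domains {
		if d == suf && len(domains) > 1 {
			// Mirrors the fatal path above: "example.com" is a
			// suffix of "minio.example.com", so requests cannot
			// be routed to a bucket unambiguously.
			fmt.Printf("overlapping domains %v not allowed\n", domains)
		}
	}
}
```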

View File

@@ -20,6 +20,8 @@ import (
"bytes"
"context"
"errors"
"io/ioutil"
"net/http"
"github.com/minio/minio/pkg/hash"
)
@@ -27,9 +29,9 @@ import (
var errConfigNotFound = errors.New("config file not found")
func readConfig(ctx context.Context, objAPI ObjectLayer, configFile string) ([]byte, error) {
var buffer bytes.Buffer
// Read entire content by setting size to -1
if err := objAPI.GetObject(ctx, minioMetaBucket, configFile, 0, -1, &buffer, "", ObjectOptions{}); err != nil {
r, err := objAPI.GetObjectNInfo(ctx, minioMetaBucket, configFile, nil, http.Header{}, readLock, ObjectOptions{})
if err != nil {
// Treat object not found as config not found.
if isErrObjectNotFound(err) {
return nil, errConfigNotFound
@@ -37,13 +39,16 @@ func readConfig(ctx context.Context, objAPI ObjectLayer, configFile string) ([]b
return nil, err
}
defer r.Close()
// Return config not found on empty content.
if buffer.Len() == 0 {
buf, err := ioutil.ReadAll(r)
if err != nil {
return nil, err
}
if len(buf) == 0 {
return nil, errConfigNotFound
}
return buffer.Bytes(), nil
return buf, nil
}
type objectDeleter interface {
@@ -59,12 +64,12 @@ func deleteConfig(ctx context.Context, objAPI objectDeleter, configFile string)
}
func saveConfig(ctx context.Context, objAPI ObjectLayer, configFile string, data []byte) error {
hashReader, err := hash.NewReader(bytes.NewReader(data), int64(len(data)), "", getSHA256Hash(data), int64(len(data)), globalCLIContext.StrictS3Compat)
hashReader, err := hash.NewReader(bytes.NewReader(data), int64(len(data)), "", getSHA256Hash(data), int64(len(data)))
if err != nil {
return err
}
_, err = objAPI.PutObject(ctx, minioMetaBucket, configFile, NewPutObjReader(hashReader, nil, nil), ObjectOptions{})
_, err = objAPI.PutObject(ctx, minioMetaBucket, configFile, NewPutObjReader(hashReader), ObjectOptions{MaxParity: true})
return err
}
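The readConfig rewrite above replaces the buffered GetObject call with a streaming GetObjectNInfo read taken under a read lock, then maps empty content to errConfigNotFound. The core read-all-then-check-empty idiom, sketched against a plain io.ReadCloser:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
)

var errConfigNotFound = errors.New("config file not found")

// readAllOrNotFound mirrors the new readConfig flow: drain the reader,
// then map empty content to errConfigNotFound so callers can seed
// defaults instead of parsing zero bytes.
func readAllOrNotFound(r io.ReadCloser) ([]byte, error) {
	defer r.Close()
	buf, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, err
	}
	if len(buf) == 0 {
		return nil, errConfigNotFound
	}
	return buf, nil
}

func main() {
	_, err := readAllOrNotFound(ioutil.NopCloser(bytes.NewReader(nil)))
	fmt.Println(err) // config file not found
}
```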

View File

@@ -18,6 +18,7 @@ package cmd
import (
"context"
"crypto/tls"
"fmt"
"strings"
"sync"
@@ -26,7 +27,6 @@ import (
"github.com/minio/minio/cmd/config/api"
"github.com/minio/minio/cmd/config/cache"
"github.com/minio/minio/cmd/config/compress"
"github.com/minio/minio/cmd/config/crawler"
"github.com/minio/minio/cmd/config/dns"
"github.com/minio/minio/cmd/config/etcd"
"github.com/minio/minio/cmd/config/heal"
@@ -34,6 +34,7 @@ import (
"github.com/minio/minio/cmd/config/identity/openid"
"github.com/minio/minio/cmd/config/notify"
"github.com/minio/minio/cmd/config/policy/opa"
"github.com/minio/minio/cmd/config/scanner"
"github.com/minio/minio/cmd/config/storageclass"
"github.com/minio/minio/cmd/crypto"
xhttp "github.com/minio/minio/cmd/http"
@@ -59,7 +60,7 @@ func initHelp() {
config.LoggerWebhookSubSys: logger.DefaultKVS,
config.AuditWebhookSubSys: logger.DefaultAuditKVS,
config.HealSubSys: heal.DefaultKVS,
config.CrawlerSubSys: crawler.DefaultKVS,
config.ScannerSubSys: scanner.DefaultKVS,
}
for k, v := range notify.DefaultNotificationKVS {
kvs[k] = v
@@ -116,8 +117,8 @@ func initHelp() {
Description: "manage object healing frequency and bitrot verification checks",
},
config.HelpKV{
Key: config.CrawlerSubSys,
Description: "manage crawling for usage calculation, lifecycle, healing and more",
Key: config.ScannerSubSys,
Description: "manage namespace scanning for usage calculation, lifecycle, healing and more",
},
config.HelpKV{
Key: config.LoggerWebhookSubSys,
@@ -199,7 +200,7 @@ func initHelp() {
config.CacheSubSys: cache.Help,
config.CompressionSubSys: compress.Help,
config.HealSubSys: heal.Help,
config.CrawlerSubSys: crawler.Help,
config.ScannerSubSys: scanner.Help,
config.IdentityOpenIDSubSys: openid.Help,
config.IdentityLDAPSubSys: xldap.Help,
config.PolicyOPASubSys: opa.Help,
@@ -228,7 +229,7 @@ var (
globalServerConfigMu sync.RWMutex
)
func validateConfig(s config.Config, setDriveCount int) error {
func validateConfig(s config.Config, setDriveCounts []int) error {
// We must have a global lock for this so nobody else modifies env while we do.
defer env.LockSetEnv()()
@@ -251,8 +252,10 @@ func validateConfig(s config.Config, setDriveCount int) error {
}
if globalIsErasure {
if _, err := storageclass.LookupConfig(s[config.StorageClassSubSys][config.Default], setDriveCount); err != nil {
return err
for _, setDriveCount := range setDriveCounts {
if _, err := storageclass.LookupConfig(s[config.StorageClassSubSys][config.Default], setDriveCount); err != nil {
return err
}
}
}
@@ -271,11 +274,11 @@ func validateConfig(s config.Config, setDriveCount int) error {
}
}
if _, err := heal.LookupConfig(s[config.HealSubSys][config.Default]); err != nil {
if _, err = heal.LookupConfig(s[config.HealSubSys][config.Default]); err != nil {
return err
}
if _, err := crawler.LookupConfig(s[config.CrawlerSubSys][config.Default]); err != nil {
if _, err = scanner.LookupConfig(s[config.ScannerSubSys][config.Default]); err != nil {
return err
}
@@ -293,7 +296,10 @@ func validateConfig(s config.Config, setDriveCount int) error {
}
}
{
kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), NewGatewayHTTPTransport())
kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), newCustomHTTPTransportWithHTTP2(
&tls.Config{
RootCAs: globalRootCAs,
}, defaultDialTimeout)())
if err != nil {
return err
}
@@ -342,7 +348,7 @@ func validateConfig(s config.Config, setDriveCount int) error {
return notify.TestNotificationTargets(GlobalContext, s, NewGatewayHTTPTransport(), globalNotificationSys.ConfiguredTargetIDs())
}
func lookupConfigs(s config.Config, setDriveCount int) {
func lookupConfigs(s config.Config, setDriveCounts []int) {
ctx := GlobalContext
var err error
@@ -429,7 +435,7 @@ func lookupConfigs(s config.Config, setDriveCount int) {
logger.LogIf(ctx, fmt.Errorf("Invalid api configuration: %w", err))
}
globalAPIConfig.init(apiConfig, setDriveCount)
globalAPIConfig.init(apiConfig, setDriveCounts)
// Initialize remote instance transport once.
getRemoteInstanceTransportOnce.Do(func() {
@@ -437,9 +443,17 @@ func lookupConfigs(s config.Config, setDriveCount int) {
})
if globalIsErasure {
globalStorageClass, err = storageclass.LookupConfig(s[config.StorageClassSubSys][config.Default], setDriveCount)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize storage class config: %w", err))
for i, setDriveCount := range setDriveCounts {
sc, err := storageclass.LookupConfig(s[config.StorageClassSubSys][config.Default], setDriveCount)
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize storage class config: %w", err))
break
}
// If all setDriveCounts validated successfully,
// store the resulting storage class globally.
if i == len(setDriveCounts)-1 {
globalStorageClass = sc
}
}
}
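With heterogeneous server pools, each erasure set can have a different drive count, so the storage-class config must validate against every count; the loop above commits the result only when the final iteration is reached, since any earlier failure breaks out first. The same pattern in isolation, with hypothetical names:

```go
package main

import "fmt"

// commitIfAllValid validates every candidate and keeps the result only
// if the last index is reached without a failure, as lookupConfigs does.
func commitIfAllValid(counts []int, validate func(int) (string, error)) (committed string) {
	for i, n := range counts {
		sc, err := validate(n)
		if err != nil {
			break // mirrors the `break` after logging above
		}
		if i == len(counts)-1 {
			committed = sc
		}
	}
	return committed
}

func main() {
	sc := commitIfAllValid([]int{4, 8, 16}, func(n int) (string, error) {
		return fmt.Sprintf("EC:%d", n/2), nil
	})
	fmt.Println("committed:", sc) // committed: EC:8
}
```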
@@ -461,7 +475,10 @@ func lookupConfigs(s config.Config, setDriveCount int) {
}
}
kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), NewGatewayHTTPTransport())
kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), newCustomHTTPTransportWithHTTP2(
&tls.Config{
RootCAs: globalRootCAs,
}, defaultDialTimeout)())
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to setup KMS config: %w", err))
}
@@ -470,11 +487,13 @@ func lookupConfigs(s config.Config, setDriveCount int) {
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to setup KMS with current KMS config: %w", err))
}
globalAutoEncryption = kmsCfg.AutoEncryption // Enable auto-encryption if enabled
// Enable auto-encryption if enabled
globalAutoEncryption = kmsCfg.AutoEncryption
if globalAutoEncryption && !globalIsGateway {
logger.LogIf(ctx, fmt.Errorf("%s env is deprecated please migrate to using `mc encrypt` at bucket level", crypto.EnvKMSAutoEncryption))
if kmsCfg.Vault.Enabled {
const deprecationWarning = `Native Hashicorp Vault support is deprecated and will be removed on 2021-10-01. Please migrate to KES + Hashicorp Vault: https://github.com/minio/kes/wiki/Hashicorp-Vault-Keystore
Note that native Hashicorp Vault and KES + Hashicorp Vault are not compatible.
If you need help to migrate smoothly visit: https://min.io/pricing`
logger.LogIf(ctx, fmt.Errorf(deprecationWarning))
}
globalOpenIDConfig, err = openid.LookupConfig(s[config.IdentityOpenIDSubSys][config.Default],
@@ -534,7 +553,7 @@ func lookupConfigs(s config.Config, setDriveCount int) {
http.WithAuthToken(l.AuthToken),
http.WithUserAgent(loggerUserAgent),
http.WithLogKind(string(logger.All)),
http.WithTransport(NewGatewayHTTPTransport()),
http.WithTransport(NewGatewayHTTPTransportWithClientCerts(l.ClientCert, l.ClientKey)),
),
); err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to initialize audit HTTP target: %w", err))
@@ -553,12 +572,16 @@ func lookupConfigs(s config.Config, setDriveCount int) {
}
// Apply dynamic config values
logger.LogIf(ctx, applyDynamicConfig(ctx, s))
logger.LogIf(ctx, applyDynamicConfig(ctx, newObjectLayerFn(), s))
}
// applyDynamicConfig will apply dynamic config values.
// Dynamic systems should be in config.SubSystemsDynamic as well.
func applyDynamicConfig(ctx context.Context, s config.Config) error {
func applyDynamicConfig(ctx context.Context, objAPI ObjectLayer, s config.Config) error {
if objAPI == nil {
return nil
}
// Read all dynamic configs.
// API
apiConfig, err := api.LookupConfig(s[config.APISubSys][config.Default])
@@ -571,28 +594,27 @@ func applyDynamicConfig(ctx context.Context, s config.Config) error {
if err != nil {
return fmt.Errorf("Unable to setup Compression: %w", err)
}
objAPI := newObjectLayerFn()
if objAPI != nil {
if cmpCfg.Enabled && !objAPI.IsCompressionSupported() {
return fmt.Errorf("Backend does not support compression")
}
// Validate if the object layer supports compression.
if cmpCfg.Enabled && !objAPI.IsCompressionSupported() {
return fmt.Errorf("Backend does not support compression")
}
// Heal
healCfg, err := heal.LookupConfig(s[config.HealSubSys][config.Default])
if err != nil {
logger.LogIf(ctx, fmt.Errorf("Unable to apply heal config: %w", err))
return fmt.Errorf("Unable to apply heal config: %w", err)
}
// Crawler
crawlerCfg, err := crawler.LookupConfig(s[config.CrawlerSubSys][config.Default])
// Scanner
scannerCfg, err := scanner.LookupConfig(s[config.ScannerSubSys][config.Default])
if err != nil {
return fmt.Errorf("Unable to apply crawler config: %w", err)
return fmt.Errorf("Unable to apply scanner config: %w", err)
}
// Apply configurations.
// We should not fail after this.
globalAPIConfig.init(apiConfig, globalAPIConfig.setDriveCount)
globalAPIConfig.init(apiConfig, objAPI.SetDriveCounts())
globalCompressConfigMu.Lock()
globalCompressConfig = cmpCfg
@@ -602,7 +624,7 @@ func applyDynamicConfig(ctx context.Context, s config.Config) error {
globalHealConfig = healCfg
globalHealConfigMu.Unlock()
logger.LogIf(ctx, crawlerSleeper.Update(crawlerCfg.Delay, crawlerCfg.MaxWait))
logger.LogIf(ctx, scannerSleeper.Update(scannerCfg.Delay, scannerCfg.MaxWait))
// Update all dynamic config values in memory.
globalServerConfigMu.Lock()
@@ -723,7 +745,7 @@ func loadConfig(objAPI ObjectLayer) error {
}
// Override any values from ENVs.
lookupConfigs(srvCfg, objAPI.SetDriveCount())
lookupConfigs(srvCfg, objAPI.SetDriveCounts())
// hold the mutex lock before a new config is assigned.
globalServerConfigMu.Lock()

View File

@@ -207,7 +207,7 @@ func migrateIAMConfigsEtcdToEncrypted(ctx context.Context, client *etcd.Client)
}
if encrypted && globalActiveCred.IsValid() && globalOldCred.IsValid() {
logger.Info("Rotation complete, please make sure to unset MINIO_ACCESS_KEY_OLD and MINIO_SECRET_KEY_OLD envs")
logger.Info("Rotation complete, please make sure to unset MINIO_ROOT_USER_OLD and MINIO_ROOT_PASSWORD_OLD envs")
}
return saveKeyEtcd(ctx, client, backendEncryptedFile, backendEncryptedMigrationComplete)
@@ -294,7 +294,7 @@ func migrateConfigPrefixToEncrypted(objAPI ObjectLayer, activeCredOld auth.Crede
}
if encrypted && globalActiveCred.IsValid() && activeCredOld.IsValid() {
logger.Info("Rotation complete, please make sure to unset MINIO_ACCESS_KEY_OLD and MINIO_SECRET_KEY_OLD envs")
logger.Info("Rotation complete, please make sure to unset MINIO_ROOT_USER_OLD and MINIO_ROOT_PASSWORD_OLD envs")
}
return saveConfig(GlobalContext, objAPI, backendEncryptedFile, backendEncryptedMigrationComplete)

View File

@@ -29,14 +29,14 @@ import (
// API sub-system constants
const (
apiRequestsMax = "requests_max"
apiRequestsDeadline = "requests_deadline"
apiClusterDeadline = "cluster_deadline"
apiCorsAllowOrigin = "cors_allow_origin"
apiRemoteTransportDeadline = "remote_transport_deadline"
apiListQuorum = "list_quorum"
apiExtendListCacheLife = "extend_list_cache_life"
apiRequestsMax = "requests_max"
apiRequestsDeadline = "requests_deadline"
apiClusterDeadline = "cluster_deadline"
apiCorsAllowOrigin = "cors_allow_origin"
apiRemoteTransportDeadline = "remote_transport_deadline"
apiListQuorum = "list_quorum"
apiExtendListCacheLife = "extend_list_cache_life"
apiReplicationWorkers = "replication_workers"
EnvAPIRequestsMax = "MINIO_API_REQUESTS_MAX"
EnvAPIRequestsDeadline = "MINIO_API_REQUESTS_DEADLINE"
EnvAPIClusterDeadline = "MINIO_API_CLUSTER_DEADLINE"
@@ -45,6 +45,7 @@ const (
EnvAPIListQuorum = "MINIO_API_LIST_QUORUM"
EnvAPIExtendListCacheLife = "MINIO_API_EXTEND_LIST_CACHE_LIFE"
EnvAPISecureCiphers = "MINIO_API_SECURE_CIPHERS"
EnvAPIReplicationWorkers = "MINIO_API_REPLICATION_WORKERS"
)
// Deprecated key and ENVs
@@ -78,12 +79,16 @@ var (
},
config.KV{
Key: apiListQuorum,
Value: "optimal",
Value: "strict",
},
config.KV{
Key: apiExtendListCacheLife,
Value: "0s",
},
config.KV{
Key: apiReplicationWorkers,
Value: "100",
},
}
)
@@ -96,6 +101,7 @@ type Config struct {
RemoteTransportDeadline time.Duration `json:"remote_transport_deadline"`
ListQuorum string `json:"list_strict_quorum"`
ExtendListLife time.Duration `json:"extend_list_cache_life"`
ReplicationWorkers int `json:"replication_workers"`
}
// UnmarshalJSON - Validate SS and RRS parity when unmarshalling JSON.
@@ -113,8 +119,6 @@ func (sCfg *Config) UnmarshalJSON(data []byte) error {
// acceptable quorum expected for list operations
func (sCfg Config) GetListQuorum() int {
switch sCfg.ListQuorum {
case "optimal":
return 3
case "reduced":
return 2
case "disk":
@@ -123,7 +127,7 @@ func (sCfg Config) GetListQuorum() int {
case "strict":
return -1
}
// Defaults to 3 drives per set.
// Defaults to 3 drives per set, defaults to "optimal" value
return 3
}
@@ -175,6 +179,15 @@ func LookupConfig(kvs config.KVS) (cfg Config, err error) {
return cfg, err
}
replicationWorkers, err := strconv.Atoi(env.Get(EnvAPIReplicationWorkers, kvs.Get(apiReplicationWorkers)))
if err != nil {
return cfg, err
}
if replicationWorkers <= 0 {
return cfg, config.ErrInvalidReplicationWorkersValue(nil).Msg("Minimum number of replication workers should be 1")
}
return Config{
RequestsMax: requestsMax,
RequestsDeadline: requestsDeadline,
@@ -183,5 +196,6 @@ func LookupConfig(kvs config.KVS) (cfg Config, err error) {
RemoteTransportDeadline: remoteTransportDeadline,
ListQuorum: listQuorum,
ExtendListLife: listLife,
ReplicationWorkers: replicationWorkers,
}, nil
}
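The new replication_workers knob is rejected when zero or negative. A plausible consumer (an assumption here, not shown in this diff) is a fixed-size worker pool, where a non-positive capacity would mean no job is ever processed:

```go
package main

import (
	"fmt"
	"sync"
)

// startPool is a hypothetical consumer of replication_workers: n
// goroutines draining a job channel. n must be >= 1 or nothing drains,
// matching the config's validation.
func startPool(n int, jobs <-chan string) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := range jobs {
				fmt.Printf("worker %d replicated %s\n", id, j)
			}
		}(i)
	}
	return &wg
}

func main() {
	jobs := make(chan string, 2)
	wg := startPool(2, jobs)
	jobs <- "bucket/a.txt"
	jobs <- "bucket/b.txt"
	close(jobs)
	wg.Wait()
}
```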

View File

@@ -45,5 +45,11 @@ var (
Optional: true,
Type: "duration",
},
config.HelpKV{
Key: apiReplicationWorkers,
Description: `set the number of replication workers, defaults to 100`,
Optional: true,
Type: "number",
},
}
)

View File

@@ -40,19 +40,13 @@ var (
},
config.HelpKV{
Key: Exclude,
Description: `comma separated wildcard exclusion patterns e.g. "bucket/*.tmp,*.exe"`,
Description: `exclude cache for the following patterns e.g. "bucket/*.tmp,*.exe"`,
Optional: true,
Type: "csv",
},
config.HelpKV{
Key: config.Comment,
Description: config.DefaultComment,
Optional: true,
Type: "sentence",
},
config.HelpKV{
Key: After,
Description: `minimum accesses before caching an object`,
Description: `minimum number of accesses before caching an object`,
Optional: true,
Type: "number",
},
@@ -80,5 +74,11 @@ var (
Optional: true,
Type: "string",
},
config.HelpKV{
Key: config.Comment,
Description: config.DefaultComment,
Optional: true,
Type: "sentence",
},
}
)

View File

@@ -23,6 +23,7 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/pem"
"errors"
"io/ioutil"
"github.com/minio/minio/pkg/env"
@@ -113,3 +114,12 @@ func LoadX509KeyPair(certFile, keyFile string) (tls.Certificate, error) {
}
return cert, nil
}
// EnsureCertAndKey checks if both client certificate and key paths are provided
func EnsureCertAndKey(ClientCert, ClientKey string) error {
if (ClientCert != "" && ClientKey == "") ||
(ClientCert == "" && ClientKey != "") {
return errors.New("cert and key must be specified as a pair")
}
return nil
}

View File

@@ -26,23 +26,26 @@ import (
// Config represents the compression settings.
type Config struct {
Enabled bool `json:"enabled"`
Extensions []string `json:"extensions"`
MimeTypes []string `json:"mime-types"`
Enabled bool `json:"enabled"`
AllowEncrypted bool `json:"allow_encryption"`
Extensions []string `json:"extensions"`
MimeTypes []string `json:"mime-types"`
}
// Compression environment variables
const (
Extensions = "extensions"
MimeTypes = "mime_types"
Extensions = "extensions"
AllowEncrypted = "allow_encryption"
MimeTypes = "mime_types"
EnvCompressState = "MINIO_COMPRESS_ENABLE"
EnvCompressExtensions = "MINIO_COMPRESS_EXTENSIONS"
EnvCompressMimeTypes = "MINIO_COMPRESS_MIME_TYPES"
EnvCompressState = "MINIO_COMPRESS_ENABLE"
EnvCompressAllowEncryption = "MINIO_COMPRESS_ALLOW_ENCRYPTION"
EnvCompressExtensions = "MINIO_COMPRESS_EXTENSIONS"
EnvCompressMimeTypes = "MINIO_COMPRESS_MIME_TYPES"
// Include-list for compression.
DefaultExtensions = ".txt,.log,.csv,.json,.tar,.xml,.bin"
DefaultMimeTypes = "text/*,application/json,application/xml"
DefaultMimeTypes = "text/*,application/json,application/xml,binary/octet-stream"
)
// DefaultKVS - default KV config for compression settings
@@ -52,6 +55,10 @@ var (
Key: config.Enable,
Value: config.EnableOff,
},
config.KV{
Key: AllowEncrypted,
Value: config.EnableOff,
},
config.KV{
Key: Extensions,
Value: DefaultExtensions,
@@ -101,6 +108,12 @@ func LookupConfig(kvs config.KVS) (Config, error) {
return cfg, nil
}
allowEnc := env.Get(EnvCompressAllowEncryption, kvs.Get(AllowEncrypted))
cfg.AllowEncrypted, err = config.ParseBool(allowEnc)
if err != nil {
return cfg, err
}
compressExtensions := env.Get(EnvCompressExtensions, kvs.Get(Extensions))
compressMimeTypes := env.Get(EnvCompressMimeTypes, kvs.Get(MimeTypes))
compressMimeTypesLegacy := env.Get(EnvCompressMimeTypesLegacy, kvs.Get(MimeTypes))
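How allow_encryption interacts with the write path is not part of this diff, but a plausible gating predicate (an assumption, with hypothetical names) combines the two flags like this:

```go
package main

import "fmt"

// Config mirrors the fields added above; shouldCompress is a
// hypothetical write-path predicate, not code from this diff.
type Config struct {
	Enabled        bool
	AllowEncrypted bool
}

func shouldCompress(cfg Config, objectEncrypted bool) bool {
	if !cfg.Enabled {
		return false
	}
	// Already-encrypted payloads are incompressible noise; compressing
	// before encrypting is what allow_encryption opts into.
	return !objectEncrypted || cfg.AllowEncrypted
}

func main() {
	cfg := Config{Enabled: true, AllowEncrypted: false}
	fmt.Println(shouldCompress(cfg, true))  // false
	fmt.Println(shouldCompress(cfg, false)) // true
}
```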

View File

@@ -77,6 +77,7 @@ const (
LoggerWebhookSubSys = "logger_webhook"
AuditWebhookSubSys = "audit_webhook"
HealSubSys = "heal"
ScannerSubSys = "scanner"
CrawlerSubSys = "crawler"
// Add new constants here if you add new fields to config.
@@ -114,7 +115,7 @@ var SubSystems = set.CreateStringSet(
PolicyOPASubSys,
IdentityLDAPSubSys,
IdentityOpenIDSubSys,
CrawlerSubSys,
ScannerSubSys,
HealSubSys,
NotifyAMQPSubSys,
NotifyESSubSys,
@@ -132,7 +133,7 @@ var SubSystems = set.CreateStringSet(
var SubSystemsDynamic = set.CreateStringSet(
APISubSys,
CompressionSubSys,
CrawlerSubSys,
ScannerSubSys,
HealSubSys,
)
@@ -151,7 +152,7 @@ var SubSystemsSingleTargets = set.CreateStringSet([]string{
IdentityLDAPSubSys,
IdentityOpenIDSubSys,
HealSubSys,
CrawlerSubSys,
ScannerSubSys,
}...)
// Constant separators
@@ -462,6 +463,13 @@ func LookupWorm() (bool, error) {
return ParseBool(env.Get(EnvWorm, EnableOff))
}
// Carries all the renamed sub-systems from their
// previously known names
var renamedSubsys = map[string]string{
CrawlerSubSys: ScannerSubSys,
// Add future sub-system renames
}
// Merge - merges a new config with all the
// missing values for default configs,
// returns a config.
@@ -477,9 +485,21 @@ func (c Config) Merge() Config {
}
}
if _, ok := cp[subSys]; !ok {
// A config subsystem was removed or server was downgraded.
Logger.Info("config: ignoring unknown subsystem config %q\n", subSys)
continue
rnSubSys, ok := renamedSubsys[subSys]
if !ok {
// A config subsystem was removed or server was downgraded.
Logger.Info("config: ignoring unknown subsystem config %q\n", subSys)
continue
}
// Copy over settings from previous sub-system
// to newly renamed sub-system
for _, kv := range cp[rnSubSys][Default] {
_, ok := c[subSys][tgt].Lookup(kv.Key)
if !ok {
ckvs.Set(kv.Key, kv.Value)
}
}
subSys = rnSubSys
}
cp[subSys][tgt] = ckvs
}
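The rename-aware Merge above keeps configs written by older servers working: KVs stored under the legacy crawler name are retained, the new scanner defaults fill any keys the legacy config never set, and the result is stored under the new name. A toy version of that precedence:

```go
package main

import "fmt"

func main() {
	legacy := map[string]string{"delay": "10s"}                     // old "crawler" values
	defaults := map[string]string{"delay": "5s", "max_wait": "15s"} // new "scanner" defaults

	merged := map[string]string{}
	for k, v := range legacy {
		merged[k] = v // legacy values win
	}
	for k, v := range defaults {
		if _, ok := merged[k]; !ok {
			merged[k] = v // defaults only fill gaps
		}
	}
	// Stored under the new "scanner" name:
	fmt.Println(merged) // map[delay:10s max_wait:15s]
}
```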

View File

@@ -23,17 +23,23 @@ const (
// Top level common ENVs
const (
EnvAccessKey = "MINIO_ACCESS_KEY"
EnvSecretKey = "MINIO_SECRET_KEY"
EnvAccessKeyOld = "MINIO_ACCESS_KEY_OLD"
EnvSecretKeyOld = "MINIO_SECRET_KEY_OLD"
EnvBrowser = "MINIO_BROWSER"
EnvDomain = "MINIO_DOMAIN"
EnvRegionName = "MINIO_REGION_NAME"
EnvPublicIPs = "MINIO_PUBLIC_IPS"
EnvFSOSync = "MINIO_FS_OSYNC"
EnvArgs = "MINIO_ARGS"
EnvDNSWebhook = "MINIO_DNS_WEBHOOK_ENDPOINT"
EnvAccessKey = "MINIO_ACCESS_KEY"
EnvSecretKey = "MINIO_SECRET_KEY"
EnvRootUser = "MINIO_ROOT_USER"
EnvRootPassword = "MINIO_ROOT_PASSWORD"
EnvAccessKeyOld = "MINIO_ACCESS_KEY_OLD"
EnvSecretKeyOld = "MINIO_SECRET_KEY_OLD"
EnvRootUserOld = "MINIO_ROOT_USER_OLD"
EnvRootPasswordOld = "MINIO_ROOT_PASSWORD_OLD"
EnvBrowser = "MINIO_BROWSER"
EnvDomain = "MINIO_DOMAIN"
EnvRegionName = "MINIO_REGION_NAME"
EnvPublicIPs = "MINIO_PUBLIC_IPS"
EnvFSOSync = "MINIO_FS_OSYNC"
EnvArgs = "MINIO_ARGS"
EnvDNSWebhook = "MINIO_DNS_WEBHOOK_ENDPOINT"
EnvLogPosixTimes = "MINIO_LOG_POSIX_TIMES"
EnvLogPosixThresholdInMS = "MINIO_LOG_POSIX_THRESHOLD_MS"
EnvUpdate = "MINIO_UPDATE"

View File

@@ -34,6 +34,9 @@ import (
// ErrNoEntriesFound - Indicates no entries were found for the given key (directory)
var ErrNoEntriesFound = errors.New("No entries found for this key")
// ErrDomainMissing - Indicates domain is missing
var ErrDomainMissing = errors.New("domain is missing")
const etcdPathSeparator = "/"
// create a new coredns service record for the bucket.
@@ -57,9 +60,9 @@ func (c *CoreDNS) List() (map[string][]SrvRecord, error) {
var srvRecords = map[string][]SrvRecord{}
for _, domainName := range c.domainNames {
key := msg.Path(fmt.Sprintf("%s.", domainName), c.prefixPath)
records, err := c.list(key)
records, err := c.list(key+etcdPathSeparator, true)
if err != nil {
return nil, err
return srvRecords, err
}
for _, record := range records {
if record.Key == "" {
@@ -76,7 +79,7 @@ func (c *CoreDNS) Get(bucket string) ([]SrvRecord, error) {
var srvRecords []SrvRecord
for _, domainName := range c.domainNames {
key := msg.Path(fmt.Sprintf("%s.%s.", bucket, domainName), c.prefixPath)
records, err := c.list(key)
records, err := c.list(key, false)
if err != nil {
return nil, err
}
@@ -109,19 +112,25 @@ func msgUnPath(s string) string {
// Retrieves list of entries under the key passed.
// Note that this method fetches entries up to only two levels deep.
func (c *CoreDNS) list(key string) ([]SrvRecord, error) {
func (c *CoreDNS) list(key string, domain bool) ([]SrvRecord, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultContextTimeout)
r, err := c.etcdClient.Get(ctx, key, clientv3.WithPrefix())
defer cancel()
if err != nil {
return nil, err
}
if r.Count == 0 {
key = strings.TrimSuffix(key, etcdPathSeparator)
r, err = c.etcdClient.Get(ctx, key)
if err != nil {
return nil, err
}
// Only when looking up a whole `domain` (domain == true)
// should we return an error here.
if domain && r.Count == 0 {
return nil, ErrDomainMissing
}
}
var srvRecords []SrvRecord
@@ -166,11 +175,11 @@ func (c *CoreDNS) Put(bucket string) error {
key = key + etcdPathSeparator + ip
ctx, cancel := context.WithTimeout(context.Background(), defaultContextTimeout)
_, err = c.etcdClient.Put(ctx, key, string(bucketMsg))
defer cancel()
cancel()
if err != nil {
ctx, cancel = context.WithTimeout(context.Background(), defaultContextTimeout)
c.etcdClient.Delete(ctx, key)
defer cancel()
cancel()
return err
}
}
@@ -182,18 +191,12 @@ func (c *CoreDNS) Put(bucket string) error {
func (c *CoreDNS) Delete(bucket string) error {
for _, domainName := range c.domainNames {
key := msg.Path(fmt.Sprintf("%s.%s.", bucket, domainName), c.prefixPath)
srvRecords, err := c.list(key)
ctx, cancel := context.WithTimeout(context.Background(), defaultContextTimeout)
_, err := c.etcdClient.Delete(ctx, key+etcdPathSeparator, clientv3.WithPrefix())
cancel()
if err != nil {
return err
}
for _, record := range srvRecords {
dctx, dcancel := context.WithTimeout(context.Background(), defaultContextTimeout)
if _, err = c.etcdClient.Delete(dctx, key+etcdPathSeparator+record.Host); err != nil {
dcancel()
return err
}
dcancel()
}
}
return nil
}
@@ -203,12 +206,12 @@ func (c *CoreDNS) DeleteRecord(record SrvRecord) error {
for _, domainName := range c.domainNames {
key := msg.Path(fmt.Sprintf("%s.%s.", record.Key, domainName), c.prefixPath)
dctx, dcancel := context.WithTimeout(context.Background(), defaultContextTimeout)
if _, err := c.etcdClient.Delete(dctx, key+etcdPathSeparator+record.Host); err != nil {
dcancel()
ctx, cancel := context.WithTimeout(context.Background(), defaultContextTimeout)
_, err := c.etcdClient.Delete(ctx, key+etcdPathSeparator+record.Host)
cancel()
if err != nil {
return err
}
dcancel()
}
return nil
}
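The List/Delete changes above lean on etcd range queries: appending the path separator and passing clientv3.WithPrefix() fetches, or deletes, a whole subtree in one round trip instead of enumerating records one by one. A minimal sketch against the same clientv3 API (the endpoint and key are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Get every key under the bucket's path, as the new list() does.
	resp, err := cli.Get(ctx, "/skydns/com/example/bucket/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	fmt.Println("records:", resp.Count)

	// Delete the whole subtree in one call, as the new Delete() does.
	if _, err := cli.Delete(ctx, "/skydns/com/example/bucket/", clientv3.WithPrefix()); err != nil {
		panic(err)
	}
}
```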

View File

@@ -30,6 +30,12 @@ var (
"Can only accept `on` and `off` values. To enable O_SYNC for fs backend, set this value to `on`",
)
ErrOverlappingDomainValue = newErrFn(
"Overlapping domain values",
"Please check the passed value",
"MINIO_DOMAIN only accepts non-overlapping domain values",
)
ErrInvalidDomainValue = newErrFn(
"Invalid domain value",
"Please check the passed value",
@@ -116,19 +122,19 @@ var (
ErrInvalidRotatingCredentialsBackendEncrypted = newErrFn(
"Invalid rotating credentials",
"Please set correct rotating credentials in the environment for decryption",
`Detected encrypted config backend, correct old access and secret keys should be specified via environment variables MINIO_ACCESS_KEY_OLD and MINIO_SECRET_KEY_OLD to be able to re-encrypt the MinIO config, user IAM and policies with new credentials`,
`Detected encrypted config backend, correct old access and secret keys should be specified via environment variables MINIO_ROOT_USER_OLD and MINIO_ROOT_PASSWORD_OLD to be able to re-encrypt the MinIO config, user IAM and policies with new credentials`,
)
ErrInvalidCredentialsBackendEncrypted = newErrFn(
"Invalid credentials",
"Please set correct credentials in the environment for decryption",
`Detected encrypted config backend, correct access and secret keys should be specified via environment variables MINIO_ACCESS_KEY and MINIO_SECRET_KEY to be able to decrypt the MinIO config, user IAM and policies`,
`Detected encrypted config backend, correct access and secret keys should be specified via environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD to be able to decrypt the MinIO config, user IAM and policies`,
)
ErrMissingCredentialsBackendEncrypted = newErrFn(
"Credentials missing",
"Please set your credentials in the environment",
`Detected encrypted config backend, access and secret keys should be specified via environment variables MINIO_ACCESS_KEY and MINIO_SECRET_KEY to be able to decrypt the MinIO config, user IAM and policies`,
`Detected encrypted config backend, access and secret keys should be specified via environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD to be able to decrypt the MinIO config, user IAM and policies`,
)
ErrInvalidCredentials = newErrFn(
@@ -140,13 +146,13 @@ var (
ErrEnvCredentialsMissingGateway = newErrFn(
"Credentials missing",
"Please set your credentials in the environment",
`In Gateway mode, access and secret keys should be specified via environment variables MINIO_ACCESS_KEY and MINIO_SECRET_KEY respectively`,
`In Gateway mode, access and secret keys should be specified via environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD respectively`,
)
ErrEnvCredentialsMissingDistributed = newErrFn(
"Credentials missing",
"Please set your credentials in the environment",
`In distributed server mode, access and secret keys should be specified via environment variables MINIO_ACCESS_KEY and MINIO_SECRET_KEY respectively`,
`In distributed server mode, access and secret keys should be specified via environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD respectively`,
)
ErrInvalidErasureEndpoints = newErrFn(
@@ -275,4 +281,10 @@ Example 1:
"",
"Refer to https://docs.min.io/docs/minio-kms-quickstart-guide.html for setting up SSE",
)
ErrInvalidReplicationWorkersValue = newErrFn(
"Invalid value for replication workers",
"",
"MINIO_API_REPLICATION_WORKERS: should be > 0",
)
)

View File

@@ -27,6 +27,7 @@ import (
xnet "github.com/minio/minio/pkg/net"
"go.etcd.io/etcd/clientv3"
"go.etcd.io/etcd/clientv3/namespace"
"go.uber.org/zap"
)
const (
@@ -144,6 +145,13 @@ func LookupConfig(kvs config.KVS, rootCAs *x509.CertPool) (Config, error) {
cfg.Enabled = true
cfg.DialTimeout = defaultDialTimeout
cfg.DialKeepAliveTime = defaultDialKeepAlive
// Disable etcd client SDK logging; the etcd client
// otherwise starts logging in an unexpected data
// format.
cfg.LogConfig = &zap.Config{
Level: zap.NewAtomicLevelAt(zap.FatalLevel),
Encoding: "console",
}
cfg.Endpoints = etcdEndpoints
cfg.CoreDNSPath = env.Get(EnvEtcdCoreDNSPath, kvs.Get(CoreDNSPath))
// Default path prefix for all keys on etcd, other than CoreDNSPath.

View File

@@ -66,7 +66,7 @@ var (
Help = config.HelpKVS{
config.HelpKV{
Key: Bitrot,
Description: `perform bitrot scan on disks when checking objects during crawl`,
Description: `perform bitrot scan on disks when checking objects during scanning`,
Optional: true,
Type: "on|off",
},

View File

@@ -21,17 +21,20 @@ import (
"crypto/x509"
"errors"
"fmt"
"math/rand"
"net"
"strings"
"time"
ldap "github.com/go-ldap/ldap/v3"
"github.com/minio/minio/cmd/config"
"github.com/minio/minio/pkg/env"
ldap "gopkg.in/ldap.v3"
)
const (
defaultLDAPExpiry = time.Hour * 1
dnDelimiter = ";"
)
// Config contains AD/LDAP server connectivity information.
@@ -45,50 +48,65 @@ type Config struct {
STSExpiryDuration string `json:"stsExpiryDuration"`
// Format string for usernames
UsernameFormat string `json:"usernameFormat"`
UsernameFormats []string `json:"-"`
UsernameSearchFilter string `json:"-"`
UsernameSearchBaseDNS []string `json:"-"`
UsernameFormat string `json:"usernameFormat"`
UsernameFormats []string `json:"-"`
GroupSearchBaseDN string `json:"groupSearchBaseDN"`
GroupSearchBaseDNS []string `json:"-"`
GroupSearchFilter string `json:"groupSearchFilter"`
GroupNameAttribute string `json:"groupNameAttribute"`
// User DN search parameters
UserDNSearchBaseDN string `json:"userDNSearchBaseDN"`
UserDNSearchFilter string `json:"userDNSearchFilter"`
// Group search parameters
GroupSearchBaseDistName string `json:"groupSearchBaseDN"`
GroupSearchBaseDistNames []string `json:"-"`
GroupSearchFilter string `json:"groupSearchFilter"`
// Lookup bind LDAP service account
LookupBindDN string `json:"lookupBindDN"`
LookupBindPassword string `json:"lookupBindPassword"`
stsExpiryDuration time.Duration // contains converted value
tlsSkipVerify bool // allows skipping TLS verification
serverInsecure bool // allows plain text connection to LDAP Server
serverStartTLS bool // allows plain text connection to LDAP Server
serverInsecure bool // allows plain text connection to LDAP server
serverStartTLS bool // allows using StartTLS connection to LDAP server
isUsingLookupBind bool
rootCAs *x509.CertPool
}
// LDAP keys and envs.
const (
ServerAddr = "server_addr"
STSExpiry = "sts_expiry"
UsernameFormat = "username_format"
UsernameSearchFilter = "username_search_filter"
UsernameSearchBaseDN = "username_search_base_dn"
GroupSearchFilter = "group_search_filter"
GroupNameAttribute = "group_name_attribute"
GroupSearchBaseDN = "group_search_base_dn"
TLSSkipVerify = "tls_skip_verify"
ServerInsecure = "server_insecure"
ServerStartTLS = "server_starttls"
ServerAddr = "server_addr"
STSExpiry = "sts_expiry"
LookupBindDN = "lookup_bind_dn"
LookupBindPassword = "lookup_bind_password"
UserDNSearchBaseDN = "user_dn_search_base_dn"
UserDNSearchFilter = "user_dn_search_filter"
UsernameFormat = "username_format"
GroupSearchFilter = "group_search_filter"
GroupSearchBaseDN = "group_search_base_dn"
TLSSkipVerify = "tls_skip_verify"
ServerInsecure = "server_insecure"
ServerStartTLS = "server_starttls"
EnvServerAddr = "MINIO_IDENTITY_LDAP_SERVER_ADDR"
EnvSTSExpiry = "MINIO_IDENTITY_LDAP_STS_EXPIRY"
EnvTLSSkipVerify = "MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY"
EnvServerInsecure = "MINIO_IDENTITY_LDAP_SERVER_INSECURE"
EnvServerStartTLS = "MINIO_IDENTITY_LDAP_SERVER_STARTTLS"
EnvUsernameFormat = "MINIO_IDENTITY_LDAP_USERNAME_FORMAT"
EnvUsernameSearchFilter = "MINIO_IDENTITY_LDAP_USERNAME_SEARCH_FILTER"
EnvUsernameSearchBaseDN = "MINIO_IDENTITY_LDAP_USERNAME_SEARCH_BASE_DN"
EnvGroupSearchFilter = "MINIO_IDENTITY_LDAP_GROUP_SEARCH_FILTER"
EnvGroupNameAttribute = "MINIO_IDENTITY_LDAP_GROUP_NAME_ATTRIBUTE"
EnvGroupSearchBaseDN = "MINIO_IDENTITY_LDAP_GROUP_SEARCH_BASE_DN"
EnvServerAddr = "MINIO_IDENTITY_LDAP_SERVER_ADDR"
EnvSTSExpiry = "MINIO_IDENTITY_LDAP_STS_EXPIRY"
EnvTLSSkipVerify = "MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY"
EnvServerInsecure = "MINIO_IDENTITY_LDAP_SERVER_INSECURE"
EnvServerStartTLS = "MINIO_IDENTITY_LDAP_SERVER_STARTTLS"
EnvUsernameFormat = "MINIO_IDENTITY_LDAP_USERNAME_FORMAT"
EnvUserDNSearchBaseDN = "MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN"
EnvUserDNSearchFilter = "MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER"
EnvGroupSearchFilter = "MINIO_IDENTITY_LDAP_GROUP_SEARCH_FILTER"
EnvGroupSearchBaseDN = "MINIO_IDENTITY_LDAP_GROUP_SEARCH_BASE_DN"
EnvLookupBindDN = "MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN"
EnvLookupBindPassword = "MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD"
)
var removedKeys = []string{
"username_search_filter",
"username_search_base_dn",
"group_name_attribute",
}
// DefaultKVS - default config for LDAP config
var (
DefaultKVS = config.KVS{
@@ -101,21 +119,17 @@ var (
Value: "",
},
config.KV{
Key: UsernameSearchFilter,
Key: UserDNSearchBaseDN,
Value: "",
},
config.KV{
Key: UsernameSearchBaseDN,
Key: UserDNSearchFilter,
Value: "",
},
config.KV{
Key: GroupSearchFilter,
Value: "",
},
config.KV{
Key: GroupNameAttribute,
Value: "",
},
config.KV{
Key: GroupSearchBaseDN,
Value: "",
@@ -136,117 +150,191 @@ var (
Key: ServerStartTLS,
Value: config.EnableOff,
},
config.KV{
Key: LookupBindDN,
Value: "",
},
config.KV{
Key: LookupBindPassword,
Value: "",
},
}
)
const (
dnDelimiter = ";"
)
func getGroups(conn *ldap.Conn, sreq *ldap.SearchRequest) ([]string, error) {
var groups []string
sres, err := conn.Search(sreq)
if err != nil {
// Check if there is no matching result and return empty slice.
// Ref: https://ldap.com/ldap-result-code-reference/
if ldap.IsErrorWithCode(err, 32) {
return nil, nil
}
return nil, err
}
for _, entry := range sres.Entries {
// We only queried one attribute,
// so we only look up the first one.
groups = append(groups, entry.Attributes[0].Values...)
groups = append(groups, entry.DN)
}
return groups, nil
}
func (l *Config) bind(conn *ldap.Conn, username, password string) ([]string, error) {
var bindDNS = make([]string, len(l.UsernameFormats))
func (l *Config) lookupBind(conn *ldap.Conn) error {
var err error
if l.LookupBindPassword == "" {
err = conn.UnauthenticatedBind(l.LookupBindDN)
} else {
err = conn.Bind(l.LookupBindDN, l.LookupBindPassword)
}
if ldap.IsErrorWithCode(err, 49) {
return fmt.Errorf("LDAP Lookup Bind user invalid credentials error: %v", err)
}
return err
}
// usernameFormatsBind - Iterates over all given username formats and expects
// that only one will succeed if the credentials are valid. The succeeding
// bindDN is returned or an error.
//
// In the rare case that multiple username formats succeed, implying that two
// (or more) distinct users in the LDAP directory have the same username and
// password, we return an error as we cannot identify the account intended by
// the user.
func (l *Config) usernameFormatsBind(conn *ldap.Conn, username, password string) (string, error) {
var bindDistNames []string
var errs = make([]error, len(l.UsernameFormats))
var successCount = 0
for i, usernameFormat := range l.UsernameFormats {
bindDN := fmt.Sprintf(usernameFormat, username)
// Bind with user credentials to validate the password
if err := conn.Bind(bindDN, password); err != nil {
return nil, err
errs[i] = conn.Bind(bindDN, password)
if errs[i] == nil {
bindDistNames = append(bindDistNames, bindDN)
successCount++
} else if !ldap.IsErrorWithCode(errs[i], 49) {
return "", fmt.Errorf("LDAP Bind request failed with unexpected error: %v", errs[i])
}
bindDNS[i] = bindDN
}
return bindDNS, nil
if successCount == 0 {
var errStrings []string
for _, err := range errs {
if err != nil {
errStrings = append(errStrings, err.Error())
}
}
outErr := fmt.Sprintf("All username formats failed due to invalid credentials: %s", strings.Join(errStrings, "; "))
return "", errors.New(outErr)
}
if successCount > 1 {
successDistNames := strings.Join(bindDistNames, ", ")
errMsg := fmt.Sprintf("Multiple username formats succeeded - ambiguous user login (succeeded for: %s)", successDistNames)
return "", errors.New(errMsg)
}
return bindDistNames[0], nil
}
var standardAttributes = []string{
"givenName",
"sn",
"cn",
"memberOf",
"email",
// lookupUserDN searches for the DN of the user given their username. conn is
// assumed to be using the lookup bind service account. It is required that the
// search return at most one result.
func (l *Config) lookupUserDN(conn *ldap.Conn, username string) (string, error) {
filter := strings.Replace(l.UserDNSearchFilter, "%s", ldap.EscapeFilter(username), -1)
searchRequest := ldap.NewSearchRequest(
l.UserDNSearchBaseDN,
ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
filter,
[]string{}, // we only need the DN, so we pass no attributes here
nil,
)
searchResult, err := conn.Search(searchRequest)
if err != nil {
return "", err
}
if len(searchResult.Entries) == 0 {
return "", fmt.Errorf("User DN for %s not found", username)
}
if len(searchResult.Entries) != 1 {
return "", fmt.Errorf("Multiple DNs for %s found - please fix the search filter", username)
}
return searchResult.Entries[0].DN, nil
}
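lookupUserDN substitutes the username into the configured filter only after ldap.EscapeFilter, so user input cannot inject LDAP filter metacharacters. For instance:

```go
package main

import (
	"fmt"
	"strings"

	ldap "github.com/go-ldap/ldap/v3"
)

func main() {
	// A "%s" placeholder filter like the one in UserDNSearchFilter:
	// metacharacters in the username are escaped before substitution.
	filter := strings.Replace("(uid=%s)", "%s", ldap.EscapeFilter("ali*ce"), -1)
	fmt.Println(filter) // (uid=ali\2ace)
}
```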
// Bind - binds to ldap, searches LDAP and returns list of groups.
func (l *Config) Bind(username, password string) ([]string, error) {
// Bind - binds to ldap, searches LDAP and returns the distinguished name of the
// user and the list of groups.
func (l *Config) Bind(username, password string) (string, []string, error) {
conn, err := l.Connect()
if err != nil {
return nil, err
return "", nil, err
}
defer conn.Close()
bindDNS, err := l.bind(conn, username, password)
if err != nil {
return nil, err
var bindDN string
if l.isUsingLookupBind {
// Bind to the lookup user account
if err = l.lookupBind(conn); err != nil {
return "", nil, err
}
// Lookup user DN
bindDN, err = l.lookupUserDN(conn, username)
if err != nil {
errRet := fmt.Errorf("Unable to find user DN: %s", err)
return "", nil, errRet
}
// Authenticate the user credentials.
err = conn.Bind(bindDN, password)
if err != nil {
errRet := fmt.Errorf("LDAP auth failed for DN %s: %v", bindDN, err)
return "", nil, errRet
}
// Bind to the lookup user account again to perform group search.
if err = l.lookupBind(conn); err != nil {
return "", nil, err
}
} else {
// Verify login credentials by checking the username formats.
bindDN, err = l.usernameFormatsBind(conn, username, password)
if err != nil {
return "", nil, err
}
// Bind to the successful bindDN again.
err = conn.Bind(bindDN, password)
if err != nil {
errRet := fmt.Errorf("LDAP conn failed though auth for DN %s succeeded: %v", bindDN, err)
return "", nil, errRet
}
}
// User groups lookup.
var groups []string
if l.UsernameSearchFilter != "" {
for _, userSearchBase := range l.UsernameSearchBaseDNS {
filter := strings.Replace(l.UsernameSearchFilter, "%s",
ldap.EscapeFilter(username), -1)
if l.GroupSearchFilter != "" {
for _, groupSearchBase := range l.GroupSearchBaseDistNames {
filter := strings.Replace(l.GroupSearchFilter, "%s", ldap.EscapeFilter(username), -1)
filter = strings.Replace(filter, "%d", ldap.EscapeFilter(bindDN), -1)
searchRequest := ldap.NewSearchRequest(
userSearchBase,
groupSearchBase,
ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
filter,
standardAttributes,
nil,
nil,
)
groups, err = getGroups(conn, searchRequest)
var newGroups []string
newGroups, err = getGroups(conn, searchRequest)
if err != nil {
return nil, err
errRet := fmt.Errorf("Error finding groups of %s: %v", bindDN, err)
return "", nil, errRet
}
groups = append(groups, newGroups...)
}
}
if l.GroupSearchFilter != "" {
for _, groupSearchBase := range l.GroupSearchBaseDNS {
var filters []string
if l.GroupNameAttribute == "" {
filters = []string{strings.Replace(l.GroupSearchFilter, "%s",
ldap.EscapeFilter(username), -1)}
} else {
// With group name attribute specified, make sure to
// include search queries for CN distinguished name
for _, bindDN := range bindDNS {
filters = append(filters, strings.Replace(l.GroupSearchFilter, "%s",
ldap.EscapeFilter(bindDN), -1))
}
}
for _, filter := range filters {
searchRequest := ldap.NewSearchRequest(
groupSearchBase,
ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
filter,
standardAttributes,
nil,
)
var newGroups []string
newGroups, err = getGroups(conn, searchRequest)
if err != nil {
return nil, err
}
groups = append(groups, newGroups...)
}
}
}
return groups, nil
return bindDN, groups, nil
}
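Bind's signature change ripples out to its callers, which now receive the resolved user DN alongside the group DNs. A hedged caller sketch (the Config would normally come from ldap.Lookup at startup; names outside the diff are assumptions):

```go
package example

import (
	"log"

	ldap "github.com/minio/minio/cmd/config/identity/ldap"
)

// authenticate is an illustrative caller of the reworked Bind: the DN,
// not the raw username, now identifies the account, and the group DNs
// feed policy mapping.
func authenticate(cfg *ldap.Config, user, pass string) error {
	dn, groups, err := cfg.Bind(user, pass)
	if err != nil {
		return err
	}
	log.Printf("authenticated %s (groups: %v)", dn, groups)
	return nil
}
```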
// Connect connects to the LDAP server.
@@ -287,6 +375,38 @@ func (l Config) GetExpiryDuration() time.Duration {
return l.stsExpiryDuration
}
func (l Config) testConnection() error {
conn, err := l.Connect()
if err != nil {
return fmt.Errorf("Error creating connection to LDAP server: %v", err)
}
defer conn.Close()
if l.isUsingLookupBind {
if err = l.lookupBind(conn); err != nil {
return fmt.Errorf("Error connecting as LDAP Lookup Bind user: %v", err)
}
return nil
}
// Generate some random user credentials for username formats mode test.
username := fmt.Sprintf("sometestuser%09d", rand.Int31n(1000000000))
charset := []byte("abcdefghijklmnopqrstuvwxyz" +
"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
rand.Shuffle(len(charset), func(i, j int) {
charset[i], charset[j] = charset[j], charset[i]
})
password := string(charset[:20])
_, err = l.usernameFormatsBind(conn, username, password)
if err == nil {
// We don't expect to successfully guess a credential in this
// way.
return fmt.Errorf("Unexpected random credentials success for user=%s password=%s", username, password)
} else if strings.HasPrefix(err.Error(), "All username formats failed due to invalid credentials: ") {
return nil
}
return fmt.Errorf("LDAP connection test error: %v", err)
}
// Enabled returns if jwks is enabled.
func Enabled(kvs config.KVS) bool {
return kvs.Get(ServerAddr) != ""
@@ -295,6 +415,12 @@ func Enabled(kvs config.KVS) bool {
// Lookup - initializes LDAP config, overrides config, if any ENV values are set.
func Lookup(kvs config.KVS, rootCAs *x509.CertPool) (l Config, err error) {
l = Config{}
// Purge all removed keys first
for _, k := range removedKeys {
kvs.Delete(k)
}
if err = config.CheckValidKeys(config.IdentityLDAPSubSys, kvs, DefaultKVS); err != nil {
return l, err
}
@@ -316,6 +442,8 @@ func Lookup(kvs config.KVS, rootCAs *x509.CertPool) (l Config, err error) {
l.STSExpiryDuration = v
l.stsExpiryDuration = expDur
}
// LDAP connection configuration
if v := env.Get(EnvServerInsecure, kvs.Get(ServerInsecure)); v != "" {
l.serverInsecure, err = config.ParseBool(v)
if err != nil {
@@ -334,43 +462,62 @@ func Lookup(kvs config.KVS, rootCAs *x509.CertPool) (l Config, err error) {
return l, err
}
}
// Lookup bind user configuration
lookupBindDN := env.Get(EnvLookupBindDN, kvs.Get(LookupBindDN))
lookupBindPassword := env.Get(EnvLookupBindPassword, kvs.Get(LookupBindPassword))
if lookupBindDN != "" {
l.LookupBindDN = lookupBindDN
l.LookupBindPassword = lookupBindPassword
l.isUsingLookupBind = true
// User DN search configuration
userDNSearchBaseDN := env.Get(EnvUserDNSearchBaseDN, kvs.Get(UserDNSearchBaseDN))
userDNSearchFilter := env.Get(EnvUserDNSearchFilter, kvs.Get(UserDNSearchFilter))
if userDNSearchFilter == "" || userDNSearchBaseDN == "" {
return l, errors.New("In lookup bind mode, userDN search base DN and userDN search filter are both required")
}
l.UserDNSearchBaseDN = userDNSearchBaseDN
l.UserDNSearchFilter = userDNSearchFilter
}
// Username format configuration.
if v := env.Get(EnvUsernameFormat, kvs.Get(UsernameFormat)); v != "" {
if !strings.Contains(v, "%s") {
return l, errors.New("LDAP username format doesn't have '%s' substitution")
}
l.UsernameFormats = strings.Split(v, dnDelimiter)
} else {
return l, fmt.Errorf("'%s' cannot be empty and must have a value", UsernameFormat)
}
if v := env.Get(EnvUsernameSearchFilter, kvs.Get(UsernameSearchFilter)); v != "" {
if !strings.Contains(v, "%s") {
return l, errors.New("LDAP username search filter doesn't have '%s' substitution")
}
l.UsernameSearchFilter = v
// Either lookup bind mode or username format is supported, but not
// both.
if l.isUsingLookupBind && len(l.UsernameFormats) > 0 {
return l, errors.New("Lookup Bind mode and Username Format mode are not supported at the same time")
}
if v := env.Get(EnvUsernameSearchBaseDN, kvs.Get(UsernameSearchBaseDN)); v != "" {
l.UsernameSearchBaseDNS = strings.Split(v, dnDelimiter)
// At least one of bind mode or username format must be used.
if !l.isUsingLookupBind && len(l.UsernameFormats) == 0 {
return l, errors.New("Either Lookup Bind mode or Username Format mode is required.")
}
// Test connection to LDAP server.
if err := l.testConnection(); err != nil {
return l, fmt.Errorf("Connection test for LDAP server failed: %v", err)
}
// Group search params configuration
grpSearchFilter := env.Get(EnvGroupSearchFilter, kvs.Get(GroupSearchFilter))
grpSearchNameAttr := env.Get(EnvGroupNameAttribute, kvs.Get(GroupNameAttribute))
grpSearchBaseDN := env.Get(EnvGroupSearchBaseDN, kvs.Get(GroupSearchBaseDN))
// Either all group params must be set or none must be set.
var allSet bool
if grpSearchFilter != "" {
if grpSearchNameAttr == "" || grpSearchBaseDN == "" {
return l, errors.New("All group related parameters must be set")
}
allSet = true
if (grpSearchFilter != "" && grpSearchBaseDN == "") || (grpSearchFilter == "" && grpSearchBaseDN != "") {
return l, errors.New("All group related parameters must be set")
}
if allSet {
if grpSearchFilter != "" {
l.GroupSearchFilter = grpSearchFilter
l.GroupNameAttribute = grpSearchNameAttr
l.GroupSearchBaseDNS = strings.Split(grpSearchBaseDN, dnDelimiter)
l.GroupSearchBaseDistName = grpSearchBaseDN
l.GroupSearchBaseDistNames = strings.Split(l.GroupSearchBaseDistName, dnDelimiter)
}
l.rootCAs = rootCAs

View File

@@ -27,43 +27,53 @@ var (
Type: "address",
},
config.HelpKV{
Key: UsernameFormat,
Description: `";" separated list of username bind DNs e.g. "uid=%s,cn=accounts,dc=myldapserver,dc=com"`,
Type: "list",
Key: STSExpiry,
Description: `temporary credentials validity duration in s,m,h,d. Default is "1h"`,
Optional: true,
Type: "duration",
},
config.HelpKV{
Key: UsernameSearchFilter,
Description: `user search filter, for example "(cn=%s)" or "(sAMAccountName=%s)" or "(uid=%s)"`,
Key: LookupBindDN,
Description: `DN for LDAP read-only service account used to perform DN and group lookups`,
Optional: true,
Type: "string",
},
config.HelpKV{
Key: LookupBindPassword,
Description: `Password for LDAP read-only service account used to perform DN and group lookups`,
Optional: true,
Type: "string",
},
config.HelpKV{
Key: UserDNSearchBaseDN,
Description: `Base LDAP DN to search for user DN`,
Optional: true,
Type: "string",
},
config.HelpKV{
Key: UserDNSearchFilter,
Description: `Search filter to lookup user DN`,
Optional: true,
Type: "string",
},
config.HelpKV{
Key: UsernameFormat,
Description: `";" separated list of username bind DNs e.g. "uid=%s,cn=accounts,dc=myldapserver,dc=com"`,
Optional: true,
Type: "list",
},
config.HelpKV{
Key: GroupSearchFilter,
Description: `search filter for groups e.g. "(&(objectclass=groupOfNames)(memberUid=%s))"`,
Optional: true,
Type: "string",
},
config.HelpKV{
Key: GroupSearchBaseDN,
Description: `";" separated list of group search base DNs e.g. "dc=myldapserver,dc=com"`,
Optional: true,
Type: "list",
},
config.HelpKV{
Key: UsernameSearchBaseDN,
Description: `";" separated list of username search DNs`,
Type: "list",
Optional: true,
},
config.HelpKV{
Key: GroupNameAttribute,
Description: `search attribute for group name e.g. "cn"`,
Optional: true,
Type: "string",
},
config.HelpKV{
Key: STSExpiry,
Description: `temporary credentials validity duration in s,m,h,d. Default is "1h"`,
Optional: true,
Type: "duration",
},
config.HelpKV{
Key: TLSSkipVerify,
Description: `trust server TLS without verification, defaults to "off" (verify)`,
@@ -76,6 +86,12 @@ var (
Optional: true,
Type: "on|off",
},
config.HelpKV{
Key: ServerStartTLS,
Description: `use StartTLS connection to AD/LDAP server, defaults to "off"`,
Optional: true,
Type: "on|off",
},
config.HelpKV{
Key: config.Comment,
Description: config.DefaultComment,

View File

@@ -41,13 +41,9 @@ func SetIdentityLDAP(s config.Config, ldapArgs Config) {
Key: GroupSearchFilter,
Value: ldapArgs.GroupSearchFilter,
},
config.KV{
Key: GroupNameAttribute,
Value: ldapArgs.GroupNameAttribute,
},
config.KV{
Key: GroupSearchBaseDN,
Value: ldapArgs.GroupSearchBaseDN,
Value: ldapArgs.GroupSearchBaseDistName,
},
}
}

View File

@@ -1,18 +1,18 @@
/*
* MinIO Cloud Storage, (C) 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// MinIO Cloud Storage, (C) 2020 MinIO, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !fips
package openid
@@ -22,7 +22,7 @@ import (
"github.com/dgrijalva/jwt-go"
// Needed for SHA3 to work - See: https://golang.org/src/crypto/crypto.go?s=1034:1288
_ "golang.org/x/crypto/sha3"
_ "golang.org/x/crypto/sha3" // There is no SHA-3 FIPS-140 2 compliant implementation
)
// Specific instances for EC256 and company

Some files were not shown because too many files have changed in this diff.