Compare commits

...

1644 Commits

Author SHA1 Message Date
Harshavardhana
0a3d43273f vendor: sha256 32bit updated. (#2459) 2016-08-16 16:19:29 -07:00
Harshavardhana
4dec50ba51 build: Add platform specific fixes. 2016-08-16 14:40:41 -07:00
Krishna Srinivas
f2bffe6086 fs/delete-object: fs.json filepath was incorrect. (#2448) 2016-08-16 10:05:08 -07:00
Krishna Srinivas
8e2f64aea4 fs/multipart: save metadata(fs.json) for multipart uploads. (#2450) 2016-08-16 10:04:40 -07:00
Harshavardhana
c054e633fd utils: Shutdown channel should be buffered. 2016-08-15 21:01:24 -07:00
Harshavardhana
e86dfcf41e api: Change listen bucket notification to be TopicConfiguration. (#2447) 2016-08-15 20:56:43 -07:00
Anis Elleuch
3b9dbd748b tests: Web handlers (#2429) 2016-08-15 16:13:03 -07:00
Harshavardhana
3d1bb8f439 tests: Fix hasExtendedHeader tests with env variable. 2016-08-15 16:09:08 -07:00
Krishna Srinivas
bb8a425d49 When updating the meta file, write to temp file first and then rename to the actual location.
This prevents appending the metadata to the metadata-file when a file is reuploaded.
2016-08-15 15:55:59 -07:00
Harshavardhana
0e745fdb05 fs: Enable fs.json with env MINIO_ENABLE_FSMETA 2016-08-15 15:53:48 -07:00
Anis Elleuch
51d7749c3e Check if eventN is initialized before notifying in Upload web handler (#2435) 2016-08-15 12:15:46 -07:00
Harshavardhana
76d56c6ff2 typo: Fix typos across the codebase. (#2442) 2016-08-15 02:44:48 -07:00
Harshavardhana
b41bfcbf2f utils: Fix unit tests issue. (#2441) 2016-08-15 01:59:28 -07:00
Yurii Rashkovskii
341171f326 Problem: AWS documentation defines event timestamp as 1970-01-01T00:00:00.000Z (#2440)
While Minio is using 20160814T124605Z

(See http://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html)

Solution: adhere to AWS documentation
2016-08-15 01:50:07 -07:00
karthic rao
a3592228f5 bug-fix: fix for tests failure when cache is disabled (#2439) 2016-08-15 01:25:41 -07:00
Anis Elleuch
5526ac13d2 Protect shutdown callbacks lists with a mutex (#2432) 2016-08-14 23:55:48 -07:00
Harshavardhana
9606cb9bcd posix: Disk free verification should have relaxed handling of inodes. (#2431)
Some filesystems do not implement a way to provide the total inodes available; instead inodes
are allocated based on available disk space, for example CephFS, StorNext CVFS and the AzureFile
driver. Allow the available disk space to be validated separately, and validate inodes
only if the total inodes are provided by the underlying filesystem.

Fixes #2364
2016-08-13 02:30:15 -07:00
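The commit above relaxes the disk-free check: space is always validated, but inodes are validated only when the filesystem actually reports a total inode count. A Linux-only sketch under those assumptions (the helper name and thresholds are illustrative, not Minio's actual code):

```go
package main

import (
	"fmt"
	"syscall"
)

// checkDiskFree validates available space unconditionally, but validates free
// inodes only when the filesystem reports a non-zero total inode count.
func checkDiskFree(path string, minFreeBytes, minFreeInodes uint64) error {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return err
	}
	if uint64(st.Bsize)*st.Bavail < minFreeBytes {
		return fmt.Errorf("%s: not enough free space", path)
	}
	// Filesystems such as CephFS report zero total inodes; skip the inode
	// check in that case instead of failing the disk.
	if st.Files > 0 && st.Ffree < minFreeInodes {
		return fmt.Errorf("%s: not enough free inodes", path)
	}
	return nil
}

func main() {
	if err := checkDiskFree("/", 1<<20, 1000); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("disk ok")
}
```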
Yurii Rashkovskii
7829ccea2c Routing key was misspelled as routine key (#2430) 2016-08-12 22:23:06 -07:00
Anis Elleuch
723153951c Test api responses (#2427)
* Test List Multipart Uploads with correct max-keys

* Test List Objects V1 V2 with valid parameters
2016-08-12 11:28:27 -07:00
Anis Elleuch
64dc2a2e7f Heal format by inspection should avoid picking minioMetaBucket dir if the latter comes first in the list (listDir not ordered) (#2426) 2016-08-12 08:38:04 -07:00
Anis Elleuch
fdab984c8d Add test for fresh disks healing (#2424) 2016-08-12 08:36:43 -07:00
karthic rao
70fd38818e clean up: ineffassign fixes (#2411) 2016-08-12 00:26:30 -07:00
Jesse Lucas
ef0a108dde Graceful shutdown for ServerMux (#2341) 2016-08-11 21:33:55 -07:00
karthic rao
0b225269e1 tests: posix: tests cleaning up and enhancing coverage. (#2410) 2016-08-11 19:57:14 -07:00
Anis Elleuch
fe62688683 Add tests for Damerau Levenshtein algorithm (#2407) 2016-08-11 17:50:04 -07:00
Anis Elleuch
fadb71351c Test Post policy parsing and checking conditions (#2408) 2016-08-11 17:49:40 -07:00
Matthieu Fronton
402af93da2 Update how-to-install-golang URL (#2406) 2016-08-11 12:01:12 -07:00
Harshavardhana
d1bb8a5b21 api: refactor the bucket policy reading and writing. (#2395)
Policies are read once during server startup and subsequently
managed through an in-memory map. The in-memory map is updated
as and when new changes come in.
2016-08-10 20:10:47 -07:00
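The commit above keeps bucket policies in an in-memory map that is loaded once at startup and mutated on changes. A minimal sketch of such a cache, guarded by a read-write mutex, is shown below; the type and method names are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"sync"
)

// policyCache holds bucket policies in memory; reads take a shared lock,
// updates take an exclusive lock.
type policyCache struct {
	sync.RWMutex
	policies map[string]string // bucket name -> policy JSON
}

func (c *policyCache) Get(bucket string) (string, bool) {
	c.RLock()
	defer c.RUnlock()
	p, ok := c.policies[bucket]
	return p, ok
}

func (c *policyCache) Set(bucket, policy string) {
	c.Lock()
	defer c.Unlock()
	c.policies[bucket] = policy
}

func main() {
	c := &policyCache{policies: make(map[string]string)}
	c.Set("photos", `{"Version":"2012-10-17"}`)
	if p, ok := c.Get("photos"); ok {
		fmt.Println(p)
	}
}
```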
Harshavardhana
97c1289659 handlers: GetObject and HeadObject support more responses. (#2403)
- response-content-encoding.
- response-content-language.

Fixes #2393
2016-08-10 17:36:28 -07:00
Harshavardhana
8274ac2e5a tests: Make sure we try tests on free ports. (#2402)
Fixes #2376
2016-08-10 16:28:05 -07:00
Brendan Ashworth
758aa21b9c tests: add tests for certs.go and trie.go (#2394) 2016-08-10 02:26:40 -07:00
koolhead17
0dc5662f9b Doc: Fixed screenshot path for modified docs. (#2390) 2016-08-09 15:04:44 -07:00
Harshavardhana
82cd38e959 handlers: Remove 'notification.xml' when bucket is deleted. (#2389)
Do not pass around the objectHandlers object; an input argument
should conform to a type covering only what is actually used inside
the function body.
2016-08-09 11:33:45 -07:00
karthic rao
e0cf4ee9fc presignV4: fix errors response and tests. (#2375)
- Fix error response when one of the query params in the presign URL is
  missing.

- Exhaustive test coverage for presignv4.
2016-08-09 09:13:15 -07:00
Dee Koder
2a920e568c docs: Re-added code coverage badge in GitHub Readme. (#2391)
We have the fix in place to hide this on docs.minio.io
2016-08-08 22:14:38 -07:00
Harshavardhana
9c7f75d1e7 handler: Remove unused accesslog handler (#2388) 2016-08-08 21:33:21 -07:00
Harshavardhana
7e46055a15 api/handlers: Implement streaming signature v4 support. (#2370)
* api/handlers: Implement streaming signature v4 support.

Fixes #2326

* tests: Add tests for quick/safe
2016-08-08 20:56:29 -07:00
koolhead17
0c125f3596 Doc: This patch adds new guides with titles mentioned below (#2382) 2016-08-08 19:39:01 -07:00
GarimaKapoor
a1f3bf57c7 Update README.md 2016-08-07 11:08:42 -07:00
Harshavardhana
0188cd0b84 utils: Make monitorShutdownSignal take an exitFunc which would be executed upon error. (#2378)
This hook approach allows the program to keep running while still being able
to handle its own exit dynamically.

Fixes #2377
2016-08-06 23:53:10 -07:00
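The commit above replaces a hard-coded exit in the shutdown monitor with a caller-supplied exit function, so tests and embedders can intercept termination. A rough sketch of that hook approach, with illustrative names only:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// monitorShutdown waits for a termination signal and then invokes the
// caller-supplied exit function instead of calling os.Exit directly.
func monitorShutdown(exitFn func(code int)) {
	sigCh := make(chan os.Signal, 1) // buffered so the signal is not lost
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sigCh
		fmt.Println("shutting down gracefully")
		exitFn(0)
	}()
}

func main() {
	monitorShutdown(os.Exit) // production passes os.Exit; tests could pass a stub
	select {}                // keep serving until a signal arrives
}
```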
Harshavardhana
b23605a2b5 pkg/objcache: Add more tests. (#2371) 2016-08-06 10:22:14 -07:00
koolhead17
8c2985a9f5 Doc: README.md/Removed codecov badge from title. (#2367) 2016-08-05 16:53:29 -07:00
Anis Elleuch
d28fb5fe23 Add a generic registerShutdown function for graceful exit (#2344)
* Add a generic registerShutdown function for graceful exit
* Add shutdown callback test case
2016-08-05 13:48:31 -07:00
GarimaKapoor
62c0612eac Update README.md 2016-08-05 11:35:03 -07:00
karthic rao
d63ce9d60d tests: tests for signature v4 parser (#2362) 2016-08-05 08:03:18 -07:00
Remco Verhoef
5a44c34fd7 fixed some issues in readme (#2363)
thx to @MartijnVogel_twitter
2016-08-05 08:02:11 -07:00
Harshavardhana
064c51162d api: Add new ListenBucketNotificationHandler. (#2336)
This API is a precursor to implementing `minio lambda` and `mc` continuous replication.

This new API is an extension to the BucketNotification APIs.

// Request
```
GET /bucket?notificationARN=arn:minio:lambda:us-east-1:10:minio HTTP/1.1
...
...
```

// Response
```

{"Records": ...}
...
...
...
{"Records": ...}
```
2016-08-04 22:01:58 -07:00
Harshavardhana
90c20a8c11 Add codecov for minio. (#2359) 2016-08-04 16:48:50 -07:00
Harshavardhana
e783d77c3d Add codecov for minio. 2016-08-04 15:56:26 -07:00
Krishna Srinivas
e887fea485 getbucketlocation: should handle UNSIGNED-PAYLOAD for sha256 header for signature calculation. (#2358)
fixes #2355
2016-08-04 15:49:35 -07:00
Harshavardhana
de5d5ff241 pkg/crypto: Deprecate cgo sha256 version. (#2354) 2016-08-04 03:19:36 -07:00
karthic rao
2e0742e309 bucket policy: Support for '?' wildcard. (#2353)
- Support for '?' wildcard for resource matching.

- Wildcard package is added with Match functions.

- Wildcard.Match supports '*' and wild.MatchExtended supports both '*'
  and '?' wildcards in the pattern string.

- Tests for the same for the wide range of cases.
2016-08-04 00:41:32 -07:00
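The commit above adds '*' and '?' wildcard support for policy resource matching. A minimal recursive matcher illustrating that behaviour is sketched below; it is not the wildcard package's actual implementation, only a hedged example of the semantics described.

```go
package main

import "fmt"

// matchExtended reports whether name matches pattern, where '*' matches any
// (possibly empty) sequence of characters and '?' matches exactly one character.
func matchExtended(pattern, name string) bool {
	if pattern == "" {
		return name == ""
	}
	switch pattern[0] {
	case '*':
		// Either '*' matches nothing, or it consumes one more character of name.
		return matchExtended(pattern[1:], name) ||
			(name != "" && matchExtended(pattern, name[1:]))
	case '?':
		return name != "" && matchExtended(pattern[1:], name[1:])
	default:
		return name != "" && pattern[0] == name[0] &&
			matchExtended(pattern[1:], name[1:])
	}
}

func main() {
	fmt.Println(matchExtended("mybucket/uploads/2016/*", "mybucket/uploads/2016/photo.jpg")) // true
	fmt.Println(matchExtended("mybucket/??.txt", "mybucket/ab.txt"))                         // true
}
```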
Anis Elleuch
cc0d5b6fe0 webapi: ServerInfo returns the list of variables/values in the environment of the Minio server (#2351) 2016-08-03 13:47:03 -07:00
Harshavardhana
2db51e9d61 server/config: config should be updated only when we edit the credentials. (#2345) 2016-08-02 16:48:21 -07:00
Krishna Srinivas
69fd196471 Object-cache: enforce cache size to be less than RAM. (#2338) 2016-08-02 10:04:35 -07:00
karthic rao
1494af485e tests: signature-utils test (#2342) 2016-08-01 20:54:11 -07:00
Harshavardhana
c1d70f1f9e server/config: Create 'certs' directory in initServerConfig(). (#2331)
The certs directory was created only if the config was not present; our
expectation is that the 'certs' directory is present all the
time, making it easier to document.
2016-07-30 14:55:20 -07:00
karthic rao
9baf599c91 tests: Unit tests and fixes for copyBuffer. (#2333)
- Unit tests for copyBuffer.
- Shadowing fix for copyBuffer.
2016-07-30 13:36:43 -07:00
Harshavardhana
8d090a20ce server: set globalCacheSize honoring system limits for max memory. (#2332)
On Unix systems it is possible to set the max memory used by
running processes using 'ulimit -m' or 'syscall.RLIMIT_AS'.

A process which exceeds this limit would be proactively
killed by the kernel with OOM. To avoid this problem with our default
cache size of 8GB, we should check whether the current system
limits are lower and set the cache size appropriately.
2016-07-30 08:50:49 -07:00
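The commit above clamps the default cache size to the process address-space limit when that limit is lower. A Linux-only sketch of that check follows; the constant and function names are illustrative, not the server's actual globals.

```go
package main

import (
	"fmt"
	"syscall"
)

// defaultCacheSize mirrors the 8 GiB default mentioned in the commit above.
const defaultCacheSize = 8 * 1024 * 1024 * 1024

// pickCacheSize returns the default cache size, lowered to the RLIMIT_AS
// soft limit when the system imposes a smaller address-space cap.
func pickCacheSize() uint64 {
	var limit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_AS, &limit); err != nil {
		return defaultCacheSize
	}
	if limit.Cur > 0 && limit.Cur < defaultCacheSize {
		return limit.Cur
	}
	return defaultCacheSize
}

func main() {
	fmt.Println("cache size:", pickCacheSize())
}
```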
karthic rao
5b86dd7659 Tests: Cleanup/Enhancement: Add few more cases to posix.ReadFile tests and use a cleaner posixTestSetup for posix tests (#2330) 2016-07-30 01:26:19 -07:00
Jesse Lucas
4b05b6a6c1 Refactoring checkPortAvailability to check each tcp network (tcp, tcp4, tcp6) if a port is taken. (#2325) 2016-07-29 18:11:00 -07:00
Jesse Lucas
851d05161a Adding return error value to checkPortAvailability to enable testing of function. Adding checkport_test.go to test checkPortAvailability. Updated server-main.go to use error value from checkPortAvailability and calls fatalIf if an error is returned. (#2322) 2016-07-29 14:05:31 -07:00
Harshavardhana
cf9ba7b88f tests: Add missing unit test for posix.ReadFile. (#2319) 2016-07-28 21:57:11 -07:00
Krishnan Parthasarathi
50dae0ab04 bucket-policy: Migrate bucket policy to minioMetaBucket/buckets (#2321) 2016-07-28 20:49:08 -07:00
Anis Elleuch
14cefd352c Heal corrupted formats of disks already containing objects (#2297) 2016-07-28 16:49:58 -07:00
Frank
f239fcac67 Switched to faster minio/sha256-simd implementation (#2320) 2016-07-28 14:44:37 -07:00
Anis Elleuch
dcc3463e48 Limit POST form fields and file size + Generic Request Size limiter (#2317)
* Use less memory when receiving a file via multipart
* Add generic http request maximum size limiter to secure against malicious clients
2016-07-28 12:02:22 -07:00
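The commit above adds a generic request-size limiter to protect against malicious clients streaming unbounded bodies. A small sketch of such a wrapper using the standard library's http.MaxBytesReader is shown below; the limit value and handler names are assumptions, not the project's actual limiter.

```go
package main

import (
	"fmt"
	"net/http"
)

// maxRequestSize is an illustrative cap on the request body.
const maxRequestSize = 5 * 1024 * 1024 * 1024 // 5 GiB

// limitRequestSize wraps a handler so the request body reader is capped;
// reads beyond the limit fail instead of consuming unbounded data.
func limitRequestSize(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
		h.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":9000", limitRequestSize(mux))
}
```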
Krishna Srinivas
7850d17f48 web-browser: disable minio browser when environmental variable MINIO_BROWSER=off (#2315)
fixes #2314
2016-07-28 04:00:33 -07:00
Harshavardhana
f503ac3db8 XL/Erasure: Make bit-rot verification based on xl.json algo. (#2299)
Currently `xl.json` saves algorithm information for bit-rot
verification. Since the bit-rot algorithms can change in the
future, make sure erasureReadFile doesn't default to
a particular algorithm. Instead use the checkSumInfo.
2016-07-28 02:20:34 -07:00
Harshavardhana
65f71ce0c5 browser: Object upload should save metadata and notify. (#2309)
Object upload from browser should save additional
incoming metadata. Additionally should also notify
through bucket notifications once they are set.

Fixes #2292
2016-07-27 21:11:15 -07:00
Harshavardhana
ad19bf0ec1 server: Add update referral for update notification URL. (#2308) 2016-07-27 19:59:19 -07:00
Harshavardhana
f0067babe0 handlers: Add 'crossdomain.xml' handler. (#2305)
Fixes #2301
2016-07-27 19:53:55 -07:00
karthic rao
6b5fce826b placing defer file.Close() right after opening it (#2306) 2016-07-27 19:22:32 -07:00
Anis Elleuch
8b3cb3a0de POST Policy, multiple fixes: AccessDenied with unmet conditions, ${filename} in Key, missing filename in multipart (#2304)
* Unsatisfied conditions will return AccessDenied instead of MissingFields

* Require form-field `file` in POST policy and make `filename` an optional attribute

* S3 feature: Replace ${filename} in Key by the filename attribute passed in multipart
2016-07-27 17:51:55 -07:00
Harshavardhana
2f7358a8a6 XL: erasure Index should have its corresponding distribution order. (#2300) 2016-07-27 11:57:08 -07:00
Harshavardhana
77248bd6e8 api: Notify events only if bucket notifications are set. (#2293)
While the existing code worked, it went through an entire cycle
of constructing the event structure and ended up not sending it.

Avoid this in the first place by returning quickly if
notifications are not set on the bucket.
2016-07-26 19:10:02 -07:00
Harshavardhana
3054b74260 docs: Fix startup message for server as well. 2016-07-26 15:54:11 -07:00
koolhead17
7d42d09da8 Doc: Replaced README & FreeBSD docs with updated minio server splash screen. (#2298)
2016-07-26 15:46:41 -07:00
Anis Elleuch
95ddf061ab Rate limit is working and supports limited waiting clients (#2295) 2016-07-26 14:17:11 -07:00
karthic rao
5fe72cf205 Removing readAllMeta from xl-v1-healing.go and placing it in xl-v1-utils.go (#2296) 2016-07-26 11:34:48 -07:00
Harshavardhana
1e3d80552f XL: format.json healing should cater for mismatching order. (#2285)
Fresh disks can be provided in any order, we need to make sure
to preserve existing disk order and populate the fresh disks
in new positions.

Thanks to Anis Elleuch <vadmeste@gmail.com> for finding this issue.
2016-07-26 03:18:47 -07:00
Harshavardhana
f253dfc922 docs: Fix erasure code image embedding issue. 2016-07-26 00:09:57 -07:00
Harshavardhana
1f9e38e3cd api: Add bucket notification util tests. (#2289) 2016-07-26 00:01:35 -07:00
karthic rao
530ed67b59 Adding leak test framework (#2254) 2016-07-25 20:39:14 -07:00
Harshavardhana
a2b6f0524d XL/erasure: Remove deprecated copyN function. (#2288) 2016-07-25 20:36:56 -07:00
karthic rao
091d80666a Enhancement for Erasure encode test. (#2287) 2016-07-25 20:36:41 -07:00
Harshavardhana
efbf7dbc0f api: Bucket notification add filter rules check and validate. (#2272)
These filtering techniques are used to validate
object names for their prefix and suffix.
2016-07-25 17:53:55 -07:00
Krishna Srinivas
043ddbd834 optimize memory allocation during erasure-read by using temporary buffer pool. (#2259)
* XL/erasure-read: optimize memory allocation during erasure-read by using temporary buffer pool.

With the change the buffer needed during GetObject by erasureReadFile is allocated only once.
2016-07-25 14:17:01 -07:00
Dee Koder
04f90bd463 doc: Broken links fixed in the Explore further section. (#2281) 2016-07-24 22:53:35 -07:00
Harshavardhana
9212e11b90 XL/GetObject: When disk is not available, checksum should be empty. (#2276) 2016-07-24 22:49:27 -07:00
Harshavardhana
79bab6b561 XL: Operations on uploads.json should cater for disk being unavailable. (#2277) 2016-07-24 18:08:15 -07:00
Krishnan Parthasarathi
7e5a78985d tests: Using listObjects clean up remaining tree walk go routines. (#2278)
* fs: Set nextMarker independent of it having a slash or not.
* tests: Using listObjects clean up remaining tree walk go routines.
* tests: Use slices to hold data instead of enumerating test cases by hand

... also fixed numbering of test cases.
2016-07-24 15:52:12 -07:00
Anis Elleuch
b0b919a1d6 Server http and https on the same port using a customized server (#2247) 2016-07-24 12:30:57 -07:00
Harshavardhana
6c2fb19ed7 docs: Removed and purged uneeded docs. (#2273) 2016-07-24 03:32:45 -07:00
Harshavardhana
f248089523 api: Implement bucket notification. (#2271)
* Implement basic S3 notifications through queues

Supports multiple queues and three basic queue types:

1. NilQueue -- messages don't get sent anywhere
2. LogQueue -- messages get logged
3. AmqpQueue -- messages are sent to an AMQP queue

* api: Implement bucket notification.

Supports two different queue types

- AMQP
- ElasticSearch.

* Add support for redis
2016-07-23 22:51:12 -07:00
Harshavardhana
f85d94288d api: extract http headers with some supported header list. (#2268) 2016-07-22 20:31:45 -07:00
Harshavardhana
55cb55675c api/multipart: Send S3 compatible error message, missing second sentence. (#2270) 2016-07-22 17:05:40 -07:00
koolhead17
7e076577de Update Minio-erasure-code-quickStart-guide.md (#2269)
Minor update to link the URL.
2016-07-22 13:35:27 -07:00
Harshavardhana
5d118141cd XL: Remove deadcode unionChecksumInfo. (#2261) 2016-07-21 19:07:00 -07:00
karthic rao
646ff2c64d Get Object disk not found test (#2264)
Test: GetObject disk not found test
2016-07-21 19:06:50 -07:00
Harshavardhana
0add96f655 fs: Save metadata for objects in minioMetaBucket directory. (#2251) 2016-07-21 17:31:14 -07:00
Krishna Srinivas
303f216150 tests: xl-v1-metadata.go, xl-v1-multipart-common.go - remove unused methods, add enhance tests to improve code coverage. (#2260) 2016-07-21 15:00:11 -07:00
koolhead17
a7b5b8e63f Doc: Modified the contents for Doctor. (#2262) 2016-07-21 14:58:16 -07:00
Krishnan Parthasarathi
5730d40478 tests: Added GetObject, DeleteObject and PutObject unit-tests (#2222) 2016-07-21 13:15:54 -07:00
karthic rao
0eaf684777 Remove consuming benchmarks, clean up closures, correct Get and PutObject Parallel benchmarks (#2258) 2016-07-21 11:17:28 -07:00
Harshavardhana
a0635dcdd9 XL: Do not rely on getLoadBalancedQuorumDisks for NS consistency. (#2243)
The reason is any function relying on `getLoadBalancedQuorumDisks`
cannot possibly have an idempotent behavior.

The problem comes from given a set of N disks returning just a
shuffled N/2 disks.  In case of a scenario where we have N/2
number of failed disks, the returned value of `getLoadBalancedQuorumDisks`
is not equal to the same failed disks so essentially calls using such
disks might succeed or fail randomly at different intervals in time.

This proposal change is we move to `getLoadBalancedDisks()`
and use the shuffled N disks as a whole. Since most of the time we might
hit a good disk since we are not reducing our solution space. This
also provides consistent behavior for all the functions which rely
on shuffled disks.

Fixes #2242
2016-07-21 00:27:08 -07:00
Dee Koder
41f4f2806d screenshots: update with the latest optimized image. (#2249) 2016-07-20 16:15:26 -07:00
Dee Koder
2a972ef1fd images: Move screenshot for docs inside docs/screenshots directory. (#2248)
* images: Move screenshot for docs inside docs/screenshots directory. Use optimized images.

* images: This fix optimizes the images for the Erasure Code Quick Start Guide
2016-07-20 13:52:30 -07:00
Harshavardhana
c1e953b368 api: Set content-encoding properly if set. (#2245)
Additionally don't set content-type if not present; the golang http
server handles this and sets it automatically.
2016-07-20 12:40:20 -07:00
Krishna Srinivas
18728a0b59 XL/erasure-read: refactor erasure read and add tests (#2232) 2016-07-20 01:30:30 -07:00
Harshavardhana
cef26fd6ea XL: Refactor usage of reduceErrs and consistent behavior. (#2240)
This refactor is also needed in light of our quorum
requirement change for the newly understood logic behind the
klauspost/reedsolomon implementation.
2016-07-19 19:24:32 -07:00
Dee Koder
f67c930731 doc: Fixed spacing with respect to code blocks. (#2241) 2016-07-19 19:08:43 -07:00
GarimaKapoor
3589a58179 Update Minio-erasure-code-quickStart-guide.md 2016-07-19 17:59:05 -07:00
Dee Koder
e8155abc18 screenshot: Use the full path to the screenshot when embedding images (#2239) 2016-07-19 17:48:18 -07:00
Dee Koder
2e8360120d headings: We need to add a consistent heading for all docs. Adding Minio FreeBSD QuickStart Guide in the title. (#2233) 2016-07-19 14:56:34 -07:00
Dee Koder
02b191222c headings: Added standardized heading for this document. (#2234) 2016-07-19 14:56:20 -07:00
Dee Koder
b699795901 docs: Remove additional headings. Added standard heading. Include numbering. (#2235) 2016-07-19 14:30:32 -07:00
Harshavardhana
86d31e99d5 api: use checkAuth now at PutBucket, DeleteBucket handlers. (#2225)
Additionally add a unit test for isReqAuthenticated function.
2016-07-18 23:56:27 -07:00
Krishna Srinivas
897d78d113 erasureReadFile and erasureCreateFile testcases. (#2229)
* unit-tests: Unit tests for erasureCreateFile and erasureReadFile.

* appendFile() should return errXLWriteQuorum.

* TestErasureReadFileOffsetLength() tests erasureReadFile() for different offset and lengths.

* Fix for the failure seen in the erasure read unit test case. Issue #2227

* Move common erasure setup code to newErasureTestSetup()

* Review fixes. Add few more test cases for erasureReadFile.
2016-07-18 23:56:16 -07:00
Harshavardhana
1f706e067d api: xmlDecoder should honor contentLength. (#2226)
This is needed so that we avoid reading large amounts
of data from compromised clients.
2016-07-18 21:20:17 -07:00
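The commit above bounds XML decoding by the declared content length so a compromised client cannot force unbounded reads. A hedged sketch of that idea using io.LimitReader is below; the helper name and the example type are illustrative only.

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"io"
)

// xmlDecodeLimited decodes XML from at most contentLength bytes of the body,
// so the decoder never reads more than the client declared.
func xmlDecodeLimited(body io.Reader, v interface{}, contentLength int64) error {
	return xml.NewDecoder(io.LimitReader(body, contentLength)).Decode(v)
}

type locationConstraint struct {
	XMLName  xml.Name `xml:"CreateBucketConfiguration"`
	Location string   `xml:"LocationConstraint"`
}

func main() {
	payload := `<CreateBucketConfiguration><LocationConstraint>us-east-1</LocationConstraint></CreateBucketConfiguration>`
	var lc locationConstraint
	if err := xmlDecodeLimited(bytes.NewReader([]byte(payload)), &lc, int64(len(payload))); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(lc.Location)
}
```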
Krishnan Parthasarathi
5cc9e4e214 fs/XL: Return IncompleteBody{} error for short writes (#2228) 2016-07-18 19:06:48 -07:00
Krishna Srinivas
27a5b61f40 tree-walk: optimize tree walk such that leaf detection of entries is delayed till the entry is sent on the treeWalkResult channel. (#2220) 2016-07-17 15:16:52 -07:00
Harshavardhana
aeac902747 API: ListBuckets doesn't have a body, we should never read the body. (#2218)
ListBuckets was incorrectly reading the body of the request, fix it.
2016-07-17 13:23:15 -07:00
Harshavardhana
aaf7803831 api: Requests should be differentiated if possible based on http router. (#2219)
In the current master, ListObjectsV2 was merged into ListObjectsHandler,
which also implements the V1 API.

Move the detection of ListObject types to its rightful place
in http router.
2016-07-17 12:32:05 -07:00
Krishna Srinivas
8cc163e51a Refactor xl.GetObject and erasureReadFile. (#2211)
* XL: Refactor xl.GetObject and erasureReadFile. erasureReadFile() responsible for just erasure coding, it takes ordered disks and checkSum slice.

* move getOrderedPartsMetadata and getOrderedDisks to xl-v1-utils.go

* Review fixes.
2016-07-16 08:35:30 -07:00
Harshavardhana
2d38046a5a utils: BucketNames with double periods and ip address should be rejected. (#2213)
Fixes #2212
2016-07-15 17:30:37 -07:00
Harshavardhana
cbb6b48b94 doc: update README.md 2016-07-15 16:12:54 -07:00
Harshavardhana
d0636d633d doc: Move FreeBSD.md to docs. 2016-07-15 16:09:01 -07:00
Harshavardhana
41187fc2ef docker: Fix docker edge build 2016-07-15 15:10:38 -07:00
koolhead17
204ec2c6c0 doc:README.md/Updated to sync with docs.minio.io (#2210)
* doc:README.md/Updated to sync with docs.minio.io

* doc:README.md/Modified the minio server output terminal to reflect new release changes.

* docs:README.md/Modified and changed location of other markdown files.
2016-07-15 15:03:59 -07:00
Krishnan Parthasarathi
3bce5db6d1 tests: Add tests to treeWalk for sortedness, recursive listing and isEnd behaviour (#2209) 2016-07-14 18:37:43 -07:00
Harshavardhana
35d438e0ff vendorize: update all vendorized packages. (#2206)
Bring in new changes from upstream for all the packages.

Important ones include
   - gorilla/mux
   - logrus
   - jwt
2016-07-14 14:59:20 -07:00
Krishna Srinivas
b090c7112e Refactor of xl.PutObjectPart and erasureCreateFile. (#2193)
* XL: Refactor of xl.PutObjectPart and erasureCreateFile.

* GetCheckSum and AddCheckSum methods for xlMetaV1

* Simple unit test case for erasureCreateFile()
2016-07-14 14:59:01 -07:00
Harshavardhana
af6109f89a update: Remove extraneous '/' in update message. (#2207) 2016-07-14 14:08:16 -07:00
Anis Elleuch
3f27734c22 Use normal color instead of forced white for users who have bright terminal background (#2200) 2016-07-13 14:27:36 -07:00
Harshavardhana
cdf1373f8e XL: Ignore and continue for cases when bucket does not exist. (#2205)
Fixes #2201
Fixes #2204
2016-07-13 13:44:33 -07:00
Krishnan Parthasarathi
45240f158d xl: Make namespace locking granular for PutObject (#2199) 2016-07-13 11:56:25 -07:00
Harshavardhana
0bd6b67ca5 server: Sort ips based on their last octet value. (#2198) 2016-07-13 06:34:59 -07:00
Harshavardhana
8c84df5e74 server: Change color codes for headings and sub-headings. (#2197)
This patch changes the color coding used for headings, sub-headings
and values as finalized.
2016-07-13 00:56:00 -07:00
Harshavardhana
dc3bafb194 XL: isQuorum rename as isDiskQuorum, word it properly. (#2196) 2016-07-13 00:29:48 -07:00
Harshavardhana
3b69b4ada4 server: Change server startup message. (#2195)
This change brings in the newly agreed upon startup message
for the server.

Adds additional links pointing to Minio SDKs as well.
2016-07-12 23:21:18 -07:00
Krishnan Parthasarathi
0610527868 XL: PutObjectPart update checksum, re-read from xl.json for the part being written. (#2191) 2016-07-12 18:23:40 -07:00
Harshavardhana
0fcfb5df3b XL/fs: Change minioMetaBucket different than '.minio' config dir. (#2190)
This fixes corruption of config directory seen when minio server
exports 'home' directory.

```
minio server ~
```
2016-07-12 15:21:29 -07:00
Harshavardhana
623e0f9243 XL: listOnlineDisks should use modTime instead of version. (#2166)
This change is needed to make reading from objects future-proof
in terms of handling online disks. Our current counter is not
based on affirmative knowledge and relies on an arithmetic sequence,
which can lead to bugs.

Using modTime simplifies the understanding of `xl.json` and future
tooling / debugging of the format.
2016-07-12 15:20:31 -07:00
utsl42
e5cd35aad0 XL: GetObjectInfo() store and retrieve user-defined object metadata. (#2189) 2016-07-12 12:45:17 -07:00
Anis Elleuch
5cd629adca XL/fs: DeleteVol should not return error cleaning multipart dir for errVolumeNotFound (#2188) 2016-07-12 10:07:32 -07:00
Anis Elleuch
0fddf3fe17 Avoid creating tmp directories under .minio/tmp/ to facilitate cleaning (#2187) 2016-07-12 09:38:45 -07:00
karthic rao
ac6ff67546 Tool for running benchmark comparison of 2 commits (#2161) 2016-07-12 02:08:38 -07:00
Harshavardhana
126865e8df XL/bucket: Remove bucket should cleanup incomplete uploads as well. (#2173)
This behavior is in accordance with S3.

Fixes #2170
2016-07-12 01:01:47 -07:00
Krishnan Parthasarathi
1c82b81408 Rename parts/objects only on onlineDisks (#2185) 2016-07-11 22:53:54 -07:00
Bala FA
749a94f6c9 tests: Add tests for signature-jwt code (#2169)
Fixes #1989
2016-07-11 21:57:40 -07:00
Harshavardhana
e9647b5f12 API/CopyObject: Refactor the code and handle if-modified-since as well. (#2183)
This completes the S3 spec behavior for CopyObject API as reported
by `s3verify`.
2016-07-11 19:24:34 -07:00
Krishnan Parthasarathi
bef72f26db xl: Make locking more granular for PutObjectPart requests (#2168) 2016-07-11 17:24:49 -07:00
Harshavardhana
ede4dd0f9c server: update command should check for 3s from 1ms. (#2175)
Programmer error :-)
2016-07-11 16:22:10 -07:00
Bala FA
bfc59b7d50 tests: improve unit tests for xl-v1-metadata. (#2172)
Fixes #2124
2016-07-11 11:42:01 -07:00
Remco Verhoef
a162198623 implemented systemd script (#2167) 2016-07-11 03:55:40 -07:00
Harshavardhana
de468f92ec posix: ReadAll should handle the case when parent is not a dir. (#2163)
It can so happen that a read request comes for a file whose
parent is not a directory but itself a file.

This fix handles this scenario - fixes #2047
2016-07-11 00:15:37 -07:00
Harshavardhana
d676e660c9 API/CopyObject: If-None-Match should return Precondition failed. (#2164) 2016-07-10 17:32:59 -07:00
Krishna Srinivas
aa7079fc7b XL/GetObject: If quorum not available during GetObject appropriate error should be returned. (#2135) 2016-07-10 17:12:22 -07:00
Harshavardhana
bdff0848ed server: Implement --ignore-disks for ignoring disks from healing. (#2158)
By default the server heals/creates missing directories and re-populates
`format.json`. In some scenarios when a disk is down for maintenance,
it would be beneficial for users to ignore such disks rather than
mistakenly using the `root` partition.

Fixes #2128
2016-07-10 14:38:15 -07:00
Bala FA
0793237d94 tests: Move signature calculation in separate function. (#2160)
Previously newTestRequest() created a request object and returned a
signature v4 signed request.  In TestCopyObject(), it is required to add
headers to the request later and then sign the request.

This patch introduces two new functions
* signRequest(): signs request using given access/secret keys.
* newTestSignedRequest(): returns new request object signed with given
  access/secret keys.

Fixes #2097
2016-07-10 11:10:59 -07:00
Bala FA
2a95eabb8a benchmarks: add parallel benchmarks for PutObject/GetObject. (#2159)
Fixes #2092
2016-07-10 11:08:45 -07:00
Krishnan Parthasarathi
bc8720406d Added specific error for InvalidObjectName (#2157) 2016-07-09 17:11:08 -07:00
Krishna Srinivas
ae80f8ca35 ObjectLayer/GetObject: Should return the right error value. Fix done in FS and XL. (#2133)
fixes #2117
2016-07-09 13:01:32 -07:00
Harshavardhana
5102a5877e API/handler: CopyObject make it behave in accordance with S3 spec. (#2155)
Fixes bugs found while running s3verify tool - fixes #2152
2016-07-09 12:13:40 -07:00
karthic rao
3341fe9b28 organizing the benchmarks in the right test files (#2154) 2016-07-09 00:45:49 -07:00
Harshavardhana
c0c8a8430e XL/PutObject: Add single putObject and multipart caching. (#2115)
- Additionally adds test cases as well for object cache.
- Adds auto-expiry with expiration and cleanup time interval.

Fixes #2080
Fixes #2091
2016-07-08 20:34:27 -07:00
karthic rao
b0c180b77c Test for ObjectLayer.GetObject() (#2153) 2016-07-08 18:26:04 -07:00
karthic rao
778b870b77 placing the http range error in objct-api-errors. (#2150) 2016-07-08 17:22:55 -07:00
Harshavardhana
cb415ef12e Merge pull request #2149 from harshavardhana/hash-order
XL/metadata: use new hashOrder algorithm for newXLMeta. (#2147)
2016-07-08 15:57:16 -07:00
Harshavardhana
6266328a85 XL/metadata: use new hashOrder algorithm for newXLMeta. (#2147) 2016-07-08 15:39:21 -07:00
frankw
63b3f1dcfd Use new algorithm to get fixed random order of disks (#2147) 2016-07-08 15:38:47 -07:00
Anis Elleuch
5ff1203fc0 Add PutObjectPart benchmark (#2145) 2016-07-08 14:28:06 -07:00
Harshavardhana
c3ab8bbd51 Turning off OSX builds for now. 2016-07-08 11:22:25 -07:00
Harshavardhana
a4a55bf134 tests: Fix erasure-readfile test formatting. 2016-07-08 11:05:30 -07:00
Harshavardhana
ec35330ebb XL/fs: GetObject should validate all its inputs. (#2142)
Fixes #2141
Fixes #2139
2016-07-08 07:46:49 -07:00
Harshavardhana
ca1b1921c4 XL: Implement ignore errors. (#2136)
Each metadata op has a list of errors which can be
ignored; this is essentially needed when

  - disks are not found
  - disks are found but cannot be accessed (permission denied)
  - disks are there but fresh disks were added

This is needed since we don't have healing code in place yet
which would have healed the freshly added disks.

Fixes #2072
2016-07-07 22:10:27 -07:00
Harshavardhana
4c21d6d09d tests: Remove racey failedDisks behavior in RenameObject tests. (#2138)
Additionally, initialize the namespace lock only once per test
run; there is no reason to initialize it multiple times, and
initializing once avoids races.

Fixes #2137
2016-07-07 19:50:44 -07:00
Bala FA
5ec7734d88 FS: Check offset is within object size in GetObject() (#2123)
Fixes #2118
2016-07-07 19:49:45 -07:00
karthic rao
2c837128ef Object layer tests revamp for individual execution (#2134) 2016-07-07 15:05:51 -07:00
Bala FA
282044d454 http: handle possible range requests properly. (#2122)
Previously the range string was not validated against various combinations
of values.  This patch fixes the issue.

Fixes #2098
2016-07-07 15:05:18 -07:00
Harshavardhana
8e53064bb4 XL/format: Initialize missing meta volume. (#2131)
Fixes #2127
2016-07-07 15:01:54 -07:00
Krishna Srinivas
f55093cdd6 multipart: During multipart list the listing go-routine should be saved to the List-pool. (#2130) 2016-07-07 09:06:35 -07:00
Harshavardhana
ddf3245677 xl/fs: offset and length cannot be negative. (#2121)
Fixes #2119
2016-07-07 01:30:34 -07:00
Harshavardhana
169c72cdab vendor: Bring new updates from blake2b-simd repo. (#2094)
This vendorization is needed to bring in new improvements
and support for AVX2 and SSE.

Fixes #2081
2016-07-06 18:24:31 -07:00
Nick Craig-Wood
8c767218a4 URL Encode X-Amz-Copy-Source as per the spec (#2114)
The documents for COPY state that the X-Amz-Copy-Source must be URL encoded.

http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
2016-07-06 15:42:17 -07:00
Bala FA
a51bb1d728 http: handle request range like Amazon S3. (#2112)
Fixes #2098
2016-07-06 12:50:24 -07:00
Krishna Srinivas
26b7d79a85 XL/ObjectCache: DeleteObject() should delete the object from the object cache. (#2113)
fixes #2103
2016-07-06 10:25:42 -07:00
Krishna Srinivas
01cbacd803 object-cache: use golang bytes.Buffer and bytes.NewReader instead of custom implementation. (#2108) 2016-07-06 01:29:49 -07:00
Harshavardhana
7bde27032d signv4: Validate preSigned payload properly. (#2106)
We need to validate the presigned payload only
if the payload is asked for; with the default payload,
i.e. 'UNSIGNED-PAYLOAD', we don't need to validate.

Fixes #2105
2016-07-05 21:00:20 -07:00
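The commit above skips payload verification when the request declares the default UNSIGNED-PAYLOAD sentinel. A small sketch of that decision, with an illustrative helper name:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

const unsignedPayload = "UNSIGNED-PAYLOAD"

// shouldVerifyPayload reports whether the declared x-amz-content-sha256 value
// requires verifying the actual payload checksum.
func shouldVerifyPayload(contentSha256 string) bool {
	return contentSha256 != "" && contentSha256 != unsignedPayload
}

func main() {
	body := []byte("example object data")
	sum := sha256.Sum256(body)
	declared := hex.EncodeToString(sum[:])

	fmt.Println(shouldVerifyPayload(declared))        // true: verify against the body
	fmt.Println(shouldVerifyPayload(unsignedPayload)) // false: skip verification
}
```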
Bala FA
44ae7a037b fix: allocate buffer to required size rather than readSizeV1 (#2095) 2016-07-05 20:59:54 -07:00
Refer #2077
2016-07-05 20:59:54 -07:00
karthic rao
a35341448f XL: Cache: Purging partially cached content upon erasureReadFile failure (#2104) 2016-07-05 12:48:54 -07:00
Krishna Srinivas
d6dfcd0ba7 unit-tests: Unit tests for functions in erasure-readfile.go (#2090) 2016-07-05 11:41:25 -07:00
Harshavardhana
8ddf52021a server: Bring in s3 compatibility fixes. (#2099)
This patch fixes the majority of discrepant messages and responses
previously reported.

There are a few discrepancies observed:

- S3 is not honoring the 'If-Modified-Since' header.
- We do not implement object policy; S3 returns a different response in this category.
- Adding new headers causes a signature mismatch, but the Minio server is fine; for example
  TestCopyObject() is to be fixed by moving the signature logic out.
  Relevant bug - https://github.com/minio/minio/issues/2097

Fixes #1955
2016-07-05 01:06:30 -07:00
Harshavardhana
8a028a9efb handler/PUT: Handle signature verification through a custom reader. (#2066)
This change brings in a new signVerifyReader which provides an io.Reader
compatible reader and additionally implements a Verify() function.

The Verify() function validates the signature present in the incoming
request. This approach was chosen to avoid complexities involved
in using io.Pipe().

Thanks to Krishna for his inputs on this.

Fixes #2058
Fixes #2054
Fixes #2087
2016-07-05 01:04:50 -07:00
Bala FA
0540863663 fix: use readSizeV1 wherever applicable. (#2093) 2016-07-04 19:21:15 -07:00
Harshavardhana
4cfbdb1bf0 server: Remove deadcode/deprecated code. (#2088) 2016-07-04 14:46:38 -07:00
Krishna Srinivas
1ec86dac2c server-tests: unify XL and FS tests into common code. server_test.go contains common test code. server_xl_test.go contains XL tests specific to XL. (#2089) 2016-07-04 13:18:41 -07:00
Krishna Srinivas
7a8b8cd0a1 tree-walk: unify FS and XL tree-walk with functional approach. (#2027) 2016-07-04 01:49:27 -07:00
karthic rao
a8a3e95835 Api/Bucket-Policy: Handler tests (#2074) 2016-07-03 22:35:30 -07:00
Bala FA
1ad5fb8f76 posix: checkDiskFree() also checks free inodes. (#2086)
Previously checkDiskFree() checked only for free available space.  This
patch makes checkDiskFree() also check for free inodes on Linux and
free clusters on Windows.

Fixes #2075
2016-07-03 22:34:45 -07:00
Bala FA
52b55afce0 FS: check whether disk format is FS or not. (#2083)
Fixes #2060
2016-07-03 20:01:40 -07:00
Harshavardhana
355f06cfea tests: Add urlEncode tests. (#2078) 2016-07-03 19:25:04 -07:00
Harshavardhana
d2557bb538 XL: GetObject caching implemented for XL. (#2017)
The object cache implementation is an XL cache, which defaults
to 8GB worth of read cache. Currently GetObject() transparently
writes to this cache upon first client read and then subsequently
serves reads from the same cache.

Currently expiration is not implemented.
2016-07-03 17:15:38 -07:00
Bala FA
8d4365d23c tests: add unit test for posix functions. (#2037)
Unit tests for posix operations.

* MakeVol
* DeleteVol
* StatVol
* ListVols
* DeleteFile
* AppendFile
* RenameFile
* StatFile

Fixes #2021
2016-07-03 11:17:08 -07:00
karthic rao
55ae7cac42 api/bucket-policy: Refactor, Handler test. (#2071)
PR contains:
- New setup utilities for running object handler tests. Here is why they are essential:
    - Unit tests have to be run in isolation, without having to be associated with other functionality that is not under test.
    - The integration tests follow the philosophy of running a test server, registering all handlers and firing HTTP requests over the socket to simulate system functionality under usual workload scenarios and test for correctness. But this philosophy cannot be adopted for running unit tests for HTTP handlers.
    - Running unit tests for API handlers:
        - Shouldn't run a test server. Should purely call the handlers' `ServeHTTP` in isolation.
        - Shouldn't register all handlers; should only register the handlers under test, so that the system is close to an isolated setup.

- As an example, the PutBucketPolicy test is illustrated using the new setup. Exhaustive cases have to be added and are listed in TODO for now.
2016-07-02 19:05:16 -07:00
Harshavardhana
48ac34919f browser: Add new release for ui-assets.go (#2070)
update `ui-assets.go` using `x-amz-date` for JSON rpc.
2016-07-02 10:54:17 -07:00
Harshavardhana
d64c3fd464 posix: Return errDiskNotWritable during disk initialization. (#2048)
It can happen that the minio server does not have
write permissions on the export paths given on the command line.

Fixes #2035
2016-07-02 01:59:28 -07:00
Harshavardhana
e5dd917c37 handlers/generic: Remove support for 'x-minio-date' (#2064) 2016-07-02 01:51:17 -07:00
Krishna Srinivas
eb5f782c74 object-handler: skip sha256 calculation if x-amz-content-sha256=="UNSIGNED-PAYLOAD" (#2038)
fixes #2024 #2056
2016-07-01 14:34:40 -07:00
Harshavardhana
734e779b19 XL/erasureCreate: Create a limit reader if size is specified. (#2059)
This is needed so that we only write the data that was asked
for; using a limit reader avoids spurious reads of the incoming
client data. Additionally a limit reader provides the server
safety from rogue clients sending copious amounts of data (for
example a denial of service attack).

This patch also caters for size == -1: when the content encoding from
a client is set as chunked, we happily read till io.EOF.
2016-07-01 14:33:28 -07:00
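The commit above wraps the client stream in a limit reader when a size is declared, and reads to EOF when size is -1 (chunked encoding). A minimal sketch of that behaviour, with an illustrative helper name:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// boundedReader caps the stream at the declared size; size == -1 means the
// client used chunked encoding, so read until EOF.
func boundedReader(data io.Reader, size int64) io.Reader {
	if size == -1 {
		return data
	}
	return io.LimitReader(data, size)
}

func main() {
	src := strings.NewReader("hello world, plus bytes the client never declared")
	limited := boundedReader(src, 11)
	buf, _ := io.ReadAll(limited)
	fmt.Printf("%q\n", buf) // "hello world"
}
```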
Krishna Srinivas
3f2b4d9dc2 Show "https" in the "minio server export/" output if certificates are available. (#2065)
fixes #2036
2016-07-01 14:32:53 -07:00
Bala FA
cd1c2db864 posix-utils: fix path handling in windows. (#2053) 2016-07-01 11:44:07 -07:00
karthic rao
48aa5f2199 api/bucket-policy: Add unit tests for more coverage, fixes a couple of bugs. (#2055)
Changes to ResourceMatch logic.
Test for action match function.
2016-06-30 23:49:59 -07:00
Krishnan Parthasarathi
bcb822c390 Send XML header before the first of whitespace chars (#2046)
* Sent XML header before the first of whitespace chars

XML parsing fails in aws cli due to unexpected whitespace character. To
fix this, we send the xml header before we send the first whitespace
character, if any.

* Fix race between sendWhiteSpaceChars and completeMultiUploadpart
2016-06-30 18:48:50 -07:00
Krishnan Parthasarathi
285a94d2c0 Ignored errors in cleanup in commitXLMetadata (#2044)
Deletion of tmp files where xl metadata was saved before the commit
operation doesn't change the error returned to the caller. So, it is to
be ignored.
2016-06-30 16:37:51 -07:00
Krishnan Parthasarathi
64899e5197 xl: Used unique tmp file to update xl.json in putObjectPart (#2043)
An in-place update to xl.json amidst concurrent PutObjectPart operations
leads to racy updates to xl.json, making it unparseable. To avoid this,
we create a unique tmp file where updates to xl.json are staged before
renaming it to the final location.
2016-06-30 16:28:01 -07:00
Danilo Pereira
812554087f getCertsPath should use getConfigPath instead of defaulting to the user's homedir. (#2039)
Fixes #2028
2016-06-30 15:49:18 -07:00
Bala FA
57bc08cc7e posix: remove disk free space check for read-only and delete methods. (#2033) 2016-06-29 11:25:35 -07:00
Harshavardhana
0e3907072c XL/fs: Initialize export paths supplied on command line (#2020)
Fixes #2013
2016-06-29 03:13:44 -07:00
karthic rao
8e8f6f90a4 adding detailed comments to server_test (#2032) 2016-06-29 02:30:36 -07:00
Bala FA
4c1a11aae6 XL: allow meta bucket name appended with tmp meta prefix. (#2007) 2016-06-29 02:28:46 -07:00
Harshavardhana
ae936a0147 XL: Relax write quorum further to N/2 + 1. (#2018)
This changes behavior in some parts of the code;
address those as well.

Fixes #2016
2016-06-29 02:10:40 -07:00
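The commit above relaxes the write quorum to N/2 + 1, i.e. a write succeeds once more than half the disks acknowledge it. A tiny sketch of that arithmetic for a few common disk counts:

```go
package main

import "fmt"

// writeQuorum reflects the relaxed rule in the change above: more than half
// of the disks must acknowledge a write.
func writeQuorum(diskCount int) int {
	return diskCount/2 + 1
}

func main() {
	for _, n := range []int{6, 8, 12, 16} {
		fmt.Printf("%d disks -> write quorum %d\n", n, writeQuorum(n))
	}
}
```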
Harshavardhana
d484157d67 XL/bitrot: Migrate to new blake2b-simd SIMD optimized implementation. (#2031)
Thanks to Frank Wessels <fwessels@xs4all.nl> for all the heavy lifting.

Comparative benchmarks are as below.
```
benchmark                old ns/op     new ns/op     delta
BenchmarkHash64-4        742           411           -44.61%
BenchmarkHash128-4       681           346           -49.19%
BenchmarkWrite1K-4       4239          1497          -64.69%
BenchmarkWrite8K-4       33633         11514         -65.77%
BenchmarkWrite32K-4      134091        45947         -65.73%
BenchmarkWrite128K-4     537976        183643        -65.86%

benchmark                old MB/s     new MB/s     speedup
BenchmarkHash64-4        86.18        155.51       1.80x
BenchmarkHash128-4       187.96       369.10       1.96x
BenchmarkWrite1K-4       241.55       683.87       2.83x
BenchmarkWrite8K-4       3897.06      11383.41     2.92x
BenchmarkWrite32K-4      977.48       2852.63      2.92x
BenchmarkWrite128K-4     243.64       713.73       2.93x
```

Fixes #2030
2016-06-29 02:06:35 -07:00
Harshavardhana
796fe165c7 server: Minor command line doc change for XL. (#2022) 2016-06-29 02:05:57 -07:00
Harshavardhana
3ac39ff107 XL: Change minimum disks supported to 6 now. (#2023)
This change coincides with another patch set which
reduces the writeQuorum requirement. With the
write quorum change it is now possible to support
a 6 disk configuration.
2016-06-29 02:05:29 -07:00
Krishnan Parthasarathi
b6b9e88e47 Added unit-tests for treeWalkPool (#1969)
* Added unit-tests for treeWalkPool
* Added unit tests for tree-walk-fs
* Added period at the end of all comments.
* FS/XL: Unified tree walk tests for both backends
* Added disk failure related tests for treewalk

Replaced removeRandomDisks with removeDiskN. There is no need to
randomize which disks fail, since the distribution of chunks across disks
during XL erasure coding is already random.
2016-06-28 22:32:00 -07:00
karthic rao
59366d8f4c Benchmarks for ObjectLayer.PutObject() (#2029) 2016-06-28 22:12:36 -07:00
Harshavardhana
748dc80047 API: add writePartTooSmallErrorResponse to extend standard error responses. (#2005)
This function is added to extend the standard error responses,
which is needed in some cases. For example, CompleteMultipartUpload
should respond with an ErrPartTooSmall error when uploaded parts are
smaller than 5MB (i.e. the minimum allowed size per part).

Fixes #1536
2016-06-28 14:51:49 -07:00
karthic rao
6dcfa7b046 Fix for tests leaving out temp directories (#2025) 2016-06-28 04:21:52 -07:00
Krishnan Parthasarathi
a854e8cc5c api: Sent ErrPreconditionFailed on If-Match failure (#2009)
* api: Sent ErrPreconditionFailed on If-Match failure

ref:
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList

* tests: Added functional tests for GetObject w/ If-Match headers set

* tests: Used verifyError to simplify errorCode and description matching on error
2016-06-28 01:18:18 -07:00
karthic rao
76f6533f8d Adding detailed comments for server_xl_test. (#2011) 2016-06-27 23:54:56 -07:00
Harshavardhana
4db2b03312 XL: Rename objectN to part.N (#2019)
Fixes #2015
2016-06-27 21:42:33 -07:00
Krishna Srinivas
5291db60c6 XL/erasure: refactor erasureReadFile. Move parallelRead into a separate function. (#2008) 2016-06-27 13:24:55 -07:00
Harshavardhana
2e1f66c37d XL: Handle quorum situations properly for write operations. (#1986)
Adds two test cases one for

 - PutObject when write quorum is not available.
 - PutObjectPart when write quorum is not available.

Fixes #1951
2016-06-27 10:01:09 -07:00
Bala FA
c88720ea2c XL/listObjects: Ignore entry if getObjectInfo() returns errFileNotFound (#2004)
Fixes #1956
2016-06-26 22:10:22 -07:00
karthic rao
ce7d5eddbc Misspell warnings fix (#2001) 2016-06-26 22:05:48 -07:00
Harshavardhana
0d3a9c9438 XL: Add tests for checkSufficientDisks, storageInfo. (#1988) 2016-06-26 19:48:02 -07:00
Harshavardhana
293ba00249 posix: Re-do tests for readDir(). (#1996) 2016-06-26 19:31:53 -07:00
Krishnan Parthasarathi
d0be09fdd3 object: checkETag compares quoted ETags properly (#1997)
Previously, checkETag didn't handle ETags with leading and trailing
double quotes. e.g "abcdef1234" == "\"abcdef1234\"" would return false.
Now, checkETag function canonicalizes the ETags passed as arguments by
removing one leading/trailing double quote.
2016-06-26 18:10:08 -07:00
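The commit above canonicalizes ETags by stripping one pair of surrounding double quotes before comparison, so "abcdef1234" and "\"abcdef1234\"" compare equal. A minimal sketch of that comparison, with illustrative helper names:

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalizeETag removes at most one leading and one trailing double quote.
func canonicalizeETag(etag string) string {
	return strings.TrimSuffix(strings.TrimPrefix(etag, `"`), `"`)
}

// etagsMatch compares two ETags after canonicalization.
func etagsMatch(a, b string) bool {
	return canonicalizeETag(a) == canonicalizeETag(b)
}

func main() {
	fmt.Println(etagsMatch(`abcdef1234`, `"abcdef1234"`)) // true
}
```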
Remco Verhoef
1e52759c3c fix typo (#1987) 2016-06-26 11:27:04 -07:00
Harshavardhana
9add048f3c erasure: Add erasure encode/decode unit tests. (#1911)
Fixes #1819
2016-06-26 03:32:49 -07:00
Harshavardhana
936a916e78 server: Add connection rate limiter for server. (#1977) 2016-06-26 03:18:07 -07:00
Harshavardhana
57146fbbb8 main: minio --help should print regardless of root. (#1985)
Remove root check entirely.

Fixes #1964
2016-06-26 03:03:52 -07:00
karthic rao
3d02f7471e Benchmarks for various object sizes for FS/XL GetObject (#1984) 2016-06-25 20:22:04 -07:00
karthic rao
b2d9a46cbb Cleaning up/refactoring tests to be more extensible (#1970) 2016-06-25 19:07:44 -07:00
Harshavardhana
42286cba70 XL: Implement new ReadAll API for files which are read in a single call. (#1974)
Add a unit test as well.
2016-06-25 14:51:06 -07:00
karthic rao
ed2fdd90b0 fs: Fix GetObject failure to read large blocks. (#1982)
Add relevant test cases as well for verifying this
part of the codebase.

Fixes #1979
2016-06-25 03:03:27 -07:00
Krishna Srinivas
cb1200a66d XL/erasure-read: read disks in parallel. (#1975)
On read failure, fallback reads from other
disks also happen in parallel.
2016-06-24 18:02:10 -07:00
Krishnan Parthasarathi
a3a310cde8 Moved tree-walk-fs to use tree-walk-pool (#1978) 2016-06-24 16:41:57 -07:00
Bala FA
f625392211 tests: add unit test for posix.readDir() (#1967)
Fixes #1820
2016-06-24 14:32:08 -07:00
Harshavardhana
e8990e42c2 XL: Make allocations simpler avoid redundant allocs. (#1961)
- Reduce 10MiB buffers for loopy calls to use 128KiB.
- start using 128KiB buffer where needed.
2016-06-24 02:06:23 -07:00
Harshavardhana
ff9fc22c72 posix: Mkdir() and OpenFile() should honor umask. (#1972)
Adds two unit tests for validation as well.

Fixes #1965
2016-06-23 20:19:27 -07:00
Harshavardhana
41c089a7e0 XL: Add mis-aligned GetObject() test. (#1960) 2016-06-22 21:42:24 -07:00
Harshavardhana
5725f3c809 Merge pull request #1958 from krisis/unittest/unc-path
Added unit tests for UNC path handling in windows
2016-06-22 17:19:40 -07:00
Harshavardhana
f4830162a4 XL: Format heal should re-allocate new UUIDs not reuse. (#1953)
This patch also supports writing to a temporary file and renaming
rather than appending to an existing file. This helps in avoiding
inconsistent files.
2016-06-22 17:18:31 -07:00
Harshavardhana
e10934a88e bitrot: Start using blake2b algorithm and remove sha512 usage. (#1957)
Fixes #1952
2016-06-22 17:13:26 -07:00
Krishnan Parthasarathi
a07751f61b Added tests to validate disk name length near MAX_PATH 2016-06-22 15:30:41 -07:00
Krishnan Parthasarathi
0766e903e3 Added unit tests for UNC path handling in windows 2016-06-22 15:30:41 -07:00
Harshavardhana
75dddfb2ae Merge pull request #1959 from krishnasrinivas/parallel-reads
Parallel reads in erasure-read
2016-06-22 15:05:35 -07:00
Harshavardhana
9b82e64a11 XL/erasure-read: Avoid memory copy, write all the dataBlocks directly to the writer. 2016-06-23 02:06:57 +05:30
Harshavardhana
bdf8738076 lock: Add unit tests for namespace lock (#1922)
Fixes #1821
2016-06-22 12:27:47 -07:00
Krishna Srinivas
d4bea5fbf8 XL/erasure-read: Add Comments and enable bitrot detection. 2016-06-23 00:34:47 +05:30
Krishna Srinivas
17efaaa902 XL/erasure-read: Support parallel reads from disks. 2016-06-23 00:34:47 +05:30
Krishnan Parthasarathi
78ae696749 Added couple of unit-tests to xlObjects (#1950)
* Added couple of units to xlObjects

* Used test_utils for initialize/destroying xlObjects
2016-06-21 15:48:27 -07:00
Harshavardhana
3fa95f5263 docker: Remove unneeded docker files and makefile tags. 2016-06-21 15:31:30 -07:00
karthic rao
ba5bb4a127 TestServer introduction and revamp of functional tests. (#1940)
Allows for easy creation of Test server with temp backend.

changes
2016-06-21 12:10:18 -07:00
Harshavardhana
409b4ddecb api: MakeBucket should set proper bucket location. (#1948)
Fixes #1942
2016-06-20 23:25:18 -07:00
Harshavardhana
ad779a8ba4 XL: Enable tests for large GetObject. (#1947)
Ref #1946
2016-06-20 22:08:36 -07:00
Bala FA
7d757033f2 erasure-readfile: Use chunk size to read from each disk for a block. (#1949)
A block of data is split into data chunks and each data chunk is
written to a disk.  Previously the block size was used to read each data
chunk, which returned corrupted data.

This patch fixes the issue by reading chunk-sized data from each disk
and assembling a block.

Fixes #1939
2016-06-20 21:40:10 -07:00
Krishnan Parthasarathi
393c504de0 Renaming a part from tmp namespace needs to be handled different from… (#1944)
* Renaming a part from the tmp namespace needs to be handled differently from the renaming of an object

* Made argument passing in xl.rename and xl.undoRename explicit
2016-06-20 19:11:55 -07:00
Krishnan Parthasarathi
6143c87c3a Make ioErrCount updates go-routine safe (#1943)
* Make ioErrCount updates go-routine safe

* Made ioErrCount int32 instead of *int32

... and implemented StorageAPI on *posix as opposed to the posix type.
This is consistent with the rule of thumb that if a value of a type is
modified as part of the interface implementation then we implement the
interface on a pointer to that type.
2016-06-20 16:57:14 -07:00
Bala FA
8559e89494 XL: fix getBlockInfo() to return correct end block (#1941)
If the requested offset/length of an object is equal to
erasureInfo.BlockSize, getBlockInfo() returns one more block than the
actual end block.  This patch fixes the issue.

This patch also adds a unit test for getting objects with big files.
2016-06-20 14:23:25 -07:00
Krishna Srinivas
468ca4ccda XL/Unittest: Add testcase for xlMetaV1{} and its methods. (#1938)
fixes #1822
2016-06-20 07:35:41 -07:00
Aakash Muttineni
4ee2136b28 Unit tests for PUT object when object already exists (#1904)
* fs/xl tests for multiple put object requests
* xl fix for put object on directory
* Unit tests fix windows test issue.
2016-06-20 06:18:47 -07:00
Yurii Rashkovskii
80d83220ad INSTALLGO.md mentions Go 1.5+ for OS X (#1936)
However, current requirement is 1.6, so the file has been updated to reflect that.
2016-06-20 06:17:57 -07:00
Bala FA
fb10c09da7 posix-utils: remove unused isValidPath() (#1937) 2016-06-20 06:17:36 -07:00
Bala FA
2f136e92f7 posix: cleanup usage of fmt.Println() (#1934) 2016-06-19 18:52:19 -07:00
Harshavardhana
50d25ca94a XL: Change AppendFile() to return only error (#1932)
AppendFile ensures that it appends the entire buffer and returns
an error otherwise. This patch removes the need for the
caller to check the returned 'n' for short writes.

Ref #1893
2016-06-19 15:31:13 -07:00
Harshavardhana
e1aad066c6 XL: CompleteMultipart should ignore the last part if it is 0 bytes. (#1931)
Fixes #1917
2016-06-19 14:51:20 -07:00
Bala FA
1ea1dba528 erasure-readfile: write to given Writer than returning buffer. (#1910)
Fixes #1889
2016-06-19 13:35:26 -07:00
Krishna Srinivas
c41bf26712 Unit tests: add unit tests for listv1/v2 for list bucket handler. (#1933)
fixes #1818
2016-06-19 13:33:00 -07:00
Harshavardhana
8c0942bf0d XL: Remove usage of reduceErr and make it isQuorum verification. (#1909)
Fixes #1908
2016-06-18 00:27:51 +05:30
Harshavardhana
7f38f46e20 vendor: update klauspost/reedsomon package with upstream changes. (#1912) 2016-06-17 15:16:26 +05:30
Krishna Srinivas
466a2e01f1 XL/Erasure: Blocksize for object-part should be derived from what was decided during xl.NewMultipartUpload which creates xl.json. (#1920)
fixes #1919
2016-06-17 12:47:15 +05:30
Krishna Srinivas
d31b38aac8 XL/GetObject: pick the xl.json with highest version for metadata information. (#1914)
fixes #1913
2016-06-17 10:56:18 +05:30
Krishna Srinivas
365f80efa3 XL/DeleteObject: delete call on a prefix should not delete the entire tree structure. (#1916)
fixes #1915
2016-06-17 10:48:43 +05:30
Anand Babu (AB) Periasamy
f51d34cedd Do not guess content-type for objects with no extension (#1918) 2016-06-17 10:12:02 +05:30
Krishnan Parthasarathi
129ebbd685 object layer: Send 200 OK and whitespace chars (#1897) 2016-06-16 09:01:06 +05:30
Krishna Srinivas
e2743d05e8 FS: remove .minio directory if .minio/multipart is empty. (#1899)
fixes #1886
2016-06-16 08:50:38 +05:30
Krishna Srinivas
de1c7d33eb XL: appendFile should return error if quorum is not met. (#1898)
Fixes #1890
2016-06-15 00:24:49 +05:30
karthic rao
afc3102488 Adding format.json during FS initialization (#1896) 2016-06-14 14:09:40 +05:30
Harshavardhana
23c88ffb1d XL/format: Fix a bug in checkDisksConsistency. (#1894) 2016-06-14 01:12:15 -07:00
Harshavardhana
ed4fe689b4 posix: Support UNC paths on windows. (#1887)
This allows us to now use 32K path names on Windows.

Fixes #1620
2016-06-13 02:53:09 -07:00
Harshavardhana
4ab57f7d60 server: terminal width should fallback to 80x25. (#1895)
Some environments might disable access to `/dev/tty`, fall
back to '80' in such scenarios.

Move to 'cheggaaa/pb' package for better cross platform
support on fetching terminal width.

Fixes #1891
2016-06-12 19:35:28 -07:00
karthic rao
276282957e Test for Complete Multipart Upload. (#1888) 2016-06-10 18:43:16 +05:30
Harshavardhana
71632b375e docs: Add comments for each data types. (#1881) 2016-06-09 06:24:11 -07:00
Aakash Muttineni
6f3bd76754 api: Add new bucket policy nesting error (#1883)
* Added ErrPolicyNesting which is returned when nesting of policies has occurred
* Replaces ErrMalformedPolicy in the case of nesting
* Changed test case in bucket-policy-parser_test.go (ErrMalformedPolicy -> ErrPolicyNesting)
2016-06-09 01:53:56 -07:00
Bala FA
f2765d98a8 XL: set write quorum (no. of disk / 2) + 2 (#1876)
Previously the write quorum was set to (no. of disks / 2) + 3.  As per the new
change, the write quorum is set to (no. of disks / 2) + 2.  This helps
to accommodate one more disk failure.
2016-06-08 22:12:36 -07:00
Bala FA
61598ed02f posix: return errFaultyDisk on I/O errors. (#1885)
When I/O errors occur more than the allowed limit, posix returns
errFaultyDisk.

Fixes #1884
2016-06-08 22:02:10 -07:00
Krishna Srinivas
1b9db9ee6c FS/PutObject: Read() data should be handled even in case of EOF. (#1864)
Fixes #1710
2016-06-08 22:00:31 -07:00
Harshavardhana
51f3d4e0ca XL/multipart: statPart should ignore errDiskNotFound. (#1862)
statPart should also take uploadId and partName as arguments.
2016-06-07 18:15:04 -07:00
Bala FA
d13e6e7156 XL: return error if DeleteObject() fails. (#1878)
Previously DeleteObject() did not return any error if write quorum was
not available.  This patch fixes the issue by returning errors.
2016-06-07 11:35:03 -07:00
Bala FA
d32f3288f8 XL: return false only if given prefix doesn't exist in all disks (#1877)
Previously xl.isObject() returned false if one of the disks didn't have
the object.  It's possible that the object is present on another disk.

This patch fixes the issue by returning false only if the given prefix
doesn't exist on any of the disks.

Fixes #1855
2016-06-07 11:02:12 -07:00
Harshavardhana
c5b6cb2420 Merge pull request #1867 from balamurugana/devel
cleanup: remove unused waitCloser.
2016-06-06 20:32:33 -07:00
Bala.FA
2eb6fa3fce cleanup: remove unused waitCloser. 2016-06-07 07:38:18 +05:30
Krishna Srinivas
acc393ba8b XL/tree-walk: Added comments, changed variable names and structure fields to improve code readability. (#1856) 2016-06-05 11:55:45 -07:00
Harshavardhana
37551a2ad3 Merge pull request #1857 from harshavardhana/erasure
erasure: Fix block index matching.
2016-06-05 09:35:31 -07:00
Harshavardhana
c6ac3fa6db erasure: Fix block index matching.
This patch fixes an important issue of block reconstruction upon block corruption.
2016-06-05 06:19:58 -07:00
Harshavardhana
18b3871705 Add erasure code. 2016-06-03 12:50:36 -07:00
Harshavardhana
73ddb5be75 Merge pull request #1850 from harshavardhana/list-rewrite
XL: Implement ListObjects channel and pool management.
2016-06-03 12:30:06 -07:00
Krishna Srinivas
002c5bf7dd XL: Treewalk handle all the race conditions and blocking channels. 2016-06-03 12:17:54 -07:00
Harshavardhana
1cf1532ca3 XL: Implement ListObjects channel and pool management. 2016-06-03 12:17:54 -07:00
Harshavardhana
70a1231f02 Merge pull request #1849 from harshavardhana/multipart
XL/PutObject: Handle all pending cases of DiskNotFound.
2016-06-03 11:58:04 -07:00
Harshavardhana
82fd907933 XL/PutObject: Handle all pending cases of DiskNotFound. 2016-06-03 11:40:44 -07:00
Harshavardhana
f39a6f8df7 Merge pull request #1852 from minio/harshavardhana-patch-1
Fix download link
2016-06-03 01:17:54 -07:00
Harshavardhana
f6013c46ea Fix download link 2016-06-03 00:05:32 -07:00
Harshavardhana
da069a18c4 Merge pull request #1847 from krisis/patch-1
Created ISSUE_TEMPLATE with basic information
2016-06-03 00:01:48 -07:00
Harshavardhana
5108ba6eb1 Merge pull request #1728 from minio/rewrite-xl
XL/FS: Rewrite in new format.
2016-06-02 23:19:17 -07:00
Krishnan Parthasarathi
1213bf9fa1 Created ISSUE_TEMPLATE with basic information
Added an issue template to gather as much useful information as we may need to resolve an issue while the user has access to the relevant details.
2016-06-03 10:31:08 +05:30
Krishna Srinivas
b00ac40c35 XL/PutObject: Calculate size if not provided by the client and update xl.json with the correct size. (#1844) 2016-06-02 17:09:47 -07:00
Harshavardhana
fb95c1fad3 XL: Bring in some modularity into format verification and healing. (#1832) 2016-06-02 16:34:15 -07:00
Krishna Srinivas
aa1d769b1e FS/Multipart: remove uploads.json on complete-multipart if no more uploadIDs are present for the object. (#1843)
Fixes #1835
2016-06-02 15:54:00 -07:00
Krishna Srinivas
611c892f8f FS/Multipart: Lock() to avoid race during PutObjectPart. (#1842)
Fixes #1839
2016-06-02 15:19:13 -07:00
Harshavardhana
67bba270a0 FS: Cleanup and Fix all multipart related operations. (#1836) 2016-06-02 12:18:56 -07:00
Harshavardhana
de21126f7e XL: Re-align the code again. 2016-06-02 01:54:06 -07:00
Harshavardhana
ae311aa53b XL: Cleanup, comments and all the updated functions. (#1830) 2016-06-01 16:43:31 -07:00
Krishna Srinivas
9b79760dcf XL/heal: heal missing format.json on replaced drives. (#1828)
fixes #1817
2016-06-01 16:15:56 -07:00
Bala FA
116b5607d7 server: fix to have readable timeout value (#1823) 2016-06-01 09:14:50 -07:00
Krishna Srinivas
614c770b5d List Objects version 2. (#1815)
object: List Objects v2 support
2016-05-31 22:10:55 -07:00
Harshavardhana
c493ab5d0d XL: Bring in sha512 checksum support. (#1797) 2016-05-31 20:23:31 -07:00
Bala FA
db2fdbf38d erasure: allocate buffer only for non-nil disk (#1811) 2016-05-31 11:55:50 -07:00
Krishna Srinivas
89f65333fb XL/Multipart: Introduce "deleted" field for uploads.json (#1810)
To future-proof the backend in case #1805 becomes an issue.
2016-05-31 11:54:01 -07:00
Krishna Srinivas
22511dc4c7 XL/Multipart: During list-multipart-uploads ignore errFileNotFound and errDiskNotFound errors. (#1813)
Fixes #1795
2016-05-31 11:53:28 -07:00
karthic rao
1947ae198e Adding read and write timeouts for unresponsive client connections (#1809) 2016-05-31 11:53:21 -07:00
Harshavardhana
2e4ab71303 Web: Update with ui changes. (#1808) 2016-05-31 02:01:02 -07:00
Harshavardhana
445dc22118 XL: Cleanup and add more comments. (#1807) 2016-05-30 16:51:59 -07:00
karthic rao
ffc2b3c304 Test for ListObjectParts. (#1802) 2016-05-30 14:36:33 -07:00
Krishnan Parthasarathi
967c2b2940 Handled possible short writes to httpResponseWriter (#1804)
* XL: Handled possible short writes to httpResponseWriter

* Added tests for Range Header combinations
2016-05-30 11:27:15 -07:00
Krishna Srinivas
b466f27705 Nslock fixes (#1803)
* XL/Multipart: Support parallel upload of parts by doing NS locking appropriately.

* XL/Multipart: hold lock on the multipart upload while aborting.
2016-05-30 11:26:10 -07:00
Harshavardhana
a4a0ea605b XL: Fix GetObject erasure decode issues. (#1793) 2016-05-29 15:38:14 -07:00
Harshavardhana
5e8de786b3 XL: Truly use unique id's in temp directory. (#1790)
This also helps in avoiding having to clean up directories afterwards.

Additionally this patch also fixes the problem of Range offsets.
2016-05-29 00:42:09 -07:00
Harshavardhana
feb337098d XL: bring in new storage API. (#1780)
Fixes #1771
2016-05-28 16:12:51 -07:00
Krishnan Parthasarathi
c87f259820 Remove parts that are missing in CompleteMultipartUpload (#1786)
* Remove parts that are missing in CompleteMultipartUpload

* Moved isUploadIDExists under proper namespace locks

* Moved code that deletes part files to a function
2016-05-28 15:15:53 -07:00
karthic rao
7278b90fe1 Adding defer to the lock (#1785) 2016-05-28 15:15:53 -07:00
Krishna Srinivas
41a5b3908b XL/ListParts: take the size from xl.json instead of backend file size as it will be different. (#1781)
Fixes #1779
2016-05-28 15:15:53 -07:00
Krishna Srinivas
3fb0b5e455 XL/Multipart: check existence of uploadID after lock. (#1778)
Fixes #1767
2016-05-28 15:15:53 -07:00
Harshavardhana
ba8bdec077 XL: ListObjects should not list when delimiter and prefix are '/'. (#1777) 2016-05-28 15:15:53 -07:00
Harshavardhana
27cc8a6529 erasure: read only dataBlocks if we have enough. (#1776)
Reconstruct with parity blocks if we don't have enough data blocks.
2016-05-28 15:15:53 -07:00
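A rough sketch of the fast path versus the reconstruction path described above, assuming a klauspost/reedsolomon-style encoder (the library the project vendors for erasure coding); the function and variable names are illustrative:

```
import (
	"bytes"

	"github.com/klauspost/reedsolomon"
)

// readErasure is a sketch, not the xl implementation: return the data shards
// directly when all of them are readable, otherwise reconstruct the missing
// ones from the parity shards. blocks[i] == nil marks a missing shard.
func readErasure(enc reedsolomon.Encoder, blocks [][]byte, dataBlocks int) ([]byte, error) {
	needReconstruct := false
	for _, b := range blocks[:dataBlocks] {
		if b == nil {
			needReconstruct = true
			break
		}
	}
	if needReconstruct {
		if err := enc.Reconstruct(blocks); err != nil {
			return nil, err
		}
	}
	return bytes.Join(blocks[:dataBlocks], nil), nil
}
```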
Krishnan Parthasarathi
302ec27fa2 Fixed race during parallel PutObjectPart requests (#1775)
The race is between two parallel PutObjectPart requests updating partsInfo in xl.json.
Previously, it was being updated under a RLock().
2016-05-28 15:15:53 -07:00
Krishnan Parthasarathi
5f679d9d1e Rename back multipart objects if read/write Quorum was unavailable (#1773) 2016-05-28 15:15:53 -07:00
Bala FA
51bb613fdf pkg/safe: remove temporary file on failure (#1774) 2016-05-28 15:15:53 -07:00
Harshavardhana
d65101a8c8 XL: Implement strided erasure distribution. (#1772)
Strided erasure distribution uses a new randomized
block distribution for each Put operation. This
information is captured inside `xl.json` for subsequent
Get operations.
2016-05-28 15:15:53 -07:00
Krishna Srinivas
6dc8323684 FS/ListMultipart: Fix FS list-multipart to work for unit test cases. 2016-05-28 15:15:53 -07:00
Krishna Srinivas
616a257bfa XL/Multipart: isMultipartUpload() checks for presence of uploads.json on a random disk. 2016-05-28 15:15:53 -07:00
Krishna Srinivas
3487b3c095 Multipart: Disable FS tests and certain test cases for list-incomplete-uploads. 2016-05-28 15:15:53 -07:00
Karthic Rao
1f51af6f37 Listmultipart tests. 2016-05-28 15:15:53 -07:00
Krishna Srinivas
b1e2b7dea2 Fix list-incomplete uploads for XL. 2016-05-28 15:15:53 -07:00
Harshavardhana
34e9ad24aa XL: Introduce new API StorageInfo. (#1770)
This is necessary for calculating the total storage
capacity from object layer. This value is also needed for
browser UI.

Buckets used to carry this information, this patch
deprecates this feature.
2016-05-28 15:15:53 -07:00
Harshavardhana
b2293c2bf4 XL: Rename, cleanup and add more comments. (#1769)
- xl-v1-bucket.go - removes a whole bunch of code.
- {xl-v1,fs-v1}-metadata.go - add a lot of comments and rename functions
   appropriately.
2016-05-28 15:15:53 -07:00
Harshavardhana
553fdb9211 XL: Bring in support for object versions written during writeQuorum. (#1762)
Erasure is initialized as needed depending on the quorum and onlineDisks.
This way we can manage the quorum at the object layer.
2016-05-28 15:15:53 -07:00
Harshavardhana
cae4782973 XL: explicit deleteObject is not needed after rename failure. (#1760)
The reason is that renameObject() does deleteObject() upon writeQuorum
failure; otherwise it keeps the successfully renamed parts if we have
reached readQuorum.
2016-05-28 15:15:53 -07:00
Krishnan Parthasarathi
3550660163 Return error for empty parts in multipartupload complete (#1758) 2016-05-28 15:15:53 -07:00
Harshavardhana
a4771265cf XL: Abortmultipart should update uploads.json properly. (#1757) 2016-05-28 15:15:53 -07:00
Harshavardhana
a9e778f460 XL/fs: initObjectLayer should cleanup tmpMetaPrefix in parallel. (#1752)
Fixes #1747
2016-05-28 15:15:53 -07:00
Harshavardhana
ee6645f421 XL: Add additional PartNumber variable as part of xl.json (#1750)
This is needed for verification of incoming parts and to
support variadic part uploads, which should be sorted
properly.

Fixes #1740
2016-05-28 15:15:53 -07:00
Harshavardhana
a97230dd56 XL/erasure: Reset dataBlocks to reduce the memory usage. (#1749)
Fixes #1748
2016-05-28 15:15:53 -07:00
Harshavardhana
1e393c6c5b XL: Add new metadata for checksum. (#1743) 2016-05-28 15:15:53 -07:00
Krishna Srinivas
b38b9fea79 XL/erasure: fix for skipping 0 padding. (#1737)
Fixes #1736
2016-05-28 15:15:53 -07:00
Krishna Srinivas
6d84e84b3c XL/multipart: fix partnumber to partname association. (#1739)
Fixes #1738
2016-05-28 15:15:53 -07:00
Harshavardhana
a00a5c6e7e XL: Multipart update uploads.json properly. (#1741) 2016-05-28 15:15:53 -07:00
Harshavardhana
ed43d5e02b No need to delete file inside erasure code (#1732) 2016-05-28 15:15:53 -07:00
Harshavardhana
293d246f95 XL/FS: Rewrite in new format. 2016-05-28 15:15:53 -07:00
Anand Babu (AB) Periasamy
63c65b4635 filter GOPATH from stack trace (#1755) 2016-05-25 02:32:35 -07:00
Harshavardhana
64b0976e1b Remove probe and tasker. (#1733)
Fixes #1717
2016-05-24 18:43:33 -07:00
Aakash Muttineni
b48b2e7f7c Part ID check (#1730)
* Added check in PutObjectPartHandler to make sure part ID does not exceed 10000. ErrInvalidMaxParts written to response if part ID exceeds the maximum value.
2016-05-24 01:52:47 -07:00
Krishnan Parthasarathi
584813e214 Used MINIO_PROFILE_DIR for saving profile information of a minio server (#1722)
To specify the directory where profiling information should be saved
  ```
    export MINIO_PROFILE_DIR=/path/to/profile/dir
  ```
By default, profiling information would be saved in a directory created
using ioutil.TempDir, which would be displayed in stdout on starting the
minio server.
2016-05-22 22:11:39 -07:00
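A sketch of the directory selection logic described above; only the MINIO_PROFILE_DIR variable and the ioutil.TempDir default come from the commit message, everything else (function name, prefix) is illustrative:

```
import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

// profileDirectory returns the directory where profiling data is saved.
func profileDirectory() string {
	if dir := os.Getenv("MINIO_PROFILE_DIR"); dir != "" {
		return dir
	}
	dir, err := ioutil.TempDir("", "minio-profile")
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println("Profiling data saved in:", dir)
	return dir
}
```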
Krishnan Parthasarathi
3a980eac1a Fix shadowing of err variable (#1718) 2016-05-21 00:43:47 -07:00
Harshavardhana
5a4b074ca0 XL: PutObject incorrectly returned after deleting multipart object. (#1715)
Fixes #1714
2016-05-20 17:27:48 -07:00
Harshavardhana
f76d975304 xl: StatVol and ListVols should handle cases when disks are missing. (#1703)
Remove unnecessary code, handle cases of missing disk.
2016-05-20 16:45:53 -07:00
Harshavardhana
6015a7a3cd Add mention-bot config 2016-05-20 13:53:15 -07:00
Krishna Srinivas
b83b87a7f6 XL/Incompleteuploads: list should save the tree-walk go routine to the map if eof is not reached. (#1695) 2016-05-20 12:09:21 -07:00
Krishna Srinivas
5b95f097d4 multipart: listing does not skip uploadIDmarker. (#1708)
Fixes #1706
2016-05-20 11:48:28 -07:00
Harshavardhana
e4240aa58f XL/objects: Initialize format.json outside of erasure. (#1640)
Fixes #1636

The new format now generates a UUID and includes it along with
the order of disks, so that the UUIDs record the real order of disks
and on the command line the user is able to specify disks in any order.

This predominantly solves our dilemma.
```
{
   "format" : "xl",
   "xl" : {
      "version" : "1",
      "disk": "00e4cf06-5bf5-4bb5-b885-4b2fff4a7959",
      "jbod" : [
         "00e4cf06-5bf5-4bb5-b885-4b2fff4a7959",
         ....
         "c47d2608-5067-4ed7-b1e4-fb81bdbb549f",
         "a543293e-99f1-4310-b540-1e450878e844",
         "18f97cbe-529a-456a-b6d4-0feacf64534d"
      ]
   },
   "version" : "1"
}
```
2016-05-20 02:22:22 -07:00
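The sample JSON above maps naturally onto a pair of Go structs; this is a sketch inferred from the sample, not the actual types used in the patch:

```
type xlFormat struct {
	Version string   `json:"version"`
	Disk    string   `json:"disk"` // UUID of this disk
	JBOD    []string `json:"jbod"` // UUIDs of all disks in their canonical order
}

type formatConfig struct {
	Version string   `json:"version"`
	Format  string   `json:"format"` // "xl"
	XL      xlFormat `json:"xl"`
}
```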
Andreas Linz
f5dfa895a5 Exit with error code if minio server fails to start (#1704)
This commit replaces the call to `errorIf` with `fatalIf`, so that the
minio server exits with a non-zero exit status if something fails, e.g.
the port was already opened by another process.
2016-05-20 02:05:52 -07:00
Harshavardhana
50c328ff19 XL: RenameFile should rename and cleanup on writeQuorum. (#1702)
Fixes #1683
2016-05-20 01:56:46 -07:00
Anand Babu (AB) Periasamy
b8405ca172 simplify profiler cleanup 2016-05-19 19:19:32 -07:00
Harshavardhana
f6d9e73548 posix: Do not lowercase names, return as is. Object layer will filter them out. (#1699) 2016-05-19 18:52:55 -07:00
Harshavardhana
7ae5470395 XL: simplify isMultipartObject not need to handle unknown errors. (#1686)
Unknown errors are just logged with errorIf.
2016-05-19 17:10:33 -07:00
Harshavardhana
9fdb69563d handler: CopyObject should save metadata. (#1698)
- Content-Type
- Content-Encoding
- ETag

Fixes #1682
2016-05-19 17:10:08 -07:00
Harshavardhana
00d0558131 XL: Enable tests for content-type for Head and Get
Fixes #1674
2016-05-19 15:34:18 -07:00
Bala.FA
13e4618309 XL/fs: Return saved content-type during GetObject
Fixes #1674
2016-05-19 15:34:18 -07:00
Krishnan Parthasarathi
2f05aacbf2 Stop profiling on exit of main goroutine (#1670)
* Stop profiling on exit of main goroutine

Previously, profiling was stopped since the Stop() method was called on exit of cli.BeforeFunc.
This led to profiling being stopped prematurely.

* Moved profiling switch statement to a separate func
2016-05-19 14:50:54 -07:00
Krishna Srinivas
dc36594ef4 XL/heal: Should skip healing if CreateFile() failed on the part which needed healing. (#1693)
Fixes #1684
2016-05-19 11:32:47 -07:00
Krishna Srinivas
537568f9f9 XL/ListVols: Fix panic. Skip if slice is nil. (#1694)
Fixes #1692
2016-05-19 11:32:19 -07:00
Harshavardhana
62b4fd6964 XL: Close the reader properly. 2016-05-18 20:16:19 -07:00
Harshavardhana
7d6ed50fc2 objects: Save all the incoming metadata properly. (#1688)
For both multipart and single put operation
2016-05-18 19:54:25 -07:00
Frank
af85acf388 Added ETag as an exposed header (required for multi part uploads) (#1681)
* Added ETag as an exposed header (required for multi part uploads)

* Fix formatting on adding ETag for exposed headers
2016-05-18 19:17:32 -07:00
Frank
a4fef436c8 Fix formatting for adding ETag for exposed headers (#1687) 2016-05-18 19:17:17 -07:00
Harshavardhana
404364ba73 XL/fs: ListMultipartUploads should list only requested entries. (#1668)
Fixes #1665
2016-05-18 15:06:29 -07:00
Krishna Srinivas
3c1ef3fee2 Locking: move locking code from xl-erasure to xl-objects. (#1673)
Fixes #1639 #1649 #1650 #1624
2016-05-18 15:05:23 -07:00
Bala FA
a0865122a7 XL/objectLayer: Save additional meta data during PutObject (#1672)
Fixes  #1602
2016-05-18 13:56:11 -07:00
Krishna Srinivas
824c8a39f1 XL/Multipart: If the part is already uploaded ignore the newly uploaded part. (#1677)
Fixes #1644
2016-05-18 13:37:28 -07:00
Krishna Srinivas
71b9341fc7 XL/Multipart: Cleanup uploads.json after abort-multipart-upload. (#1678)
Fixes #1663
2016-05-18 13:30:58 -07:00
Harshavardhana
b69a97aed4 server: Set rLimit properly to the max. (#1676)
4000 is too small to handle 500 go-routines.

Fixes #1666
2016-05-18 11:34:24 -07:00
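Raising the open-file limit to its hard maximum can be sketched with syscall.Getrlimit/Setrlimit as below (Linux); this assumes the process is allowed to raise its soft limit and is not the exact code from the patch:

```
import "syscall"

// setMaxOpenFiles raises the soft RLIMIT_NOFILE limit to the hard limit.
func setMaxOpenFiles() error {
	var rLimit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
		return err
	}
	rLimit.Cur = rLimit.Max
	return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit)
}
```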
karthic rao
2da34e4668 vendor changes to pkg/profile (#1671) 2016-05-18 09:22:06 -07:00
Harshavardhana
4bc923e63b XL/fs: Optimize calling isBucketExist() (#1656)
* posix: Avoid using getAllVolumeInfo() in getVolumeDir()

This is a necessary compromise to avoid the significant slowness this
causes under load. The compromise is also substantial in that it
avoids penalizing common cases versus special cases.

For buckets with caps on Unixes, we filter buckets based on the
latest anyway, so this is completely acceptable.

* XL/fs: Change the usage of verification of existence of buckets.

Optimize calling isBucketExists; it is not needed for all call
paths. isBucketExist should be called only for calls which use
a temporary volume location for operations; for the rest, rely on
the errors returned on their original call path.

Remove usage of filtering as well across all volume names.
2016-05-17 21:22:27 -07:00
Harshavardhana
4214da65af XL/fs: MakeVol replies should be consistent. (#1667)
Fixes #1658
Fixes #1633
2016-05-17 18:41:17 -07:00
Krishnan Parthasarathi
596fe65e84 Write pprof output files under config dir supplied (#1660)
Since config dir, supplied as command line argument, is parsed after pprof
output directory is determined, pprof output files are written in ~/.minio/profile
directory instead of <configDir>/profile/. This change fixes this behaviour.
2016-05-17 11:44:40 -07:00
Krishna Srinivas
39865c0d2e XL/Multipart: Fix list multipart output. (#1661)
Fixes #1541
2016-05-17 11:44:32 -07:00
Harshavardhana
1760687c83 XL: Make sure to create proper temporary files for renames to succeed. (#1654)
Renames work in a special manner, temporary location files should
be created properly.

Fixes #1653
Fixes #1651
2016-05-16 15:40:57 -07:00
Harshavardhana
9472299308 logging: Log only for unhandled errors, remove all the debug logging. (#1652)
This patch brings in the removal of debug logging altogether, instead
we bring in the functionality of being able to trace the errors properly
pointing back to the origination of the problem.

To enable tracing you need to enable "MINIO_TRACE" set to "1" or "true"
environment variable which would print back traces whenever there is an
error which is unhandled or at the handler layer.

By default this tracing is turned off and only user level logging is
provided.
2016-05-16 14:31:28 -07:00
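A sketch of the environment toggle described above; the helper name is hypothetical and not the function used in the patch:

```
import "os"

// traceEnabled reports whether MINIO_TRACE is set to "1" or "true".
func traceEnabled() bool {
	v := os.Getenv("MINIO_TRACE")
	return v == "1" || v == "true"
}
```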
Harshavardhana
8828fd1e5c vendor: Remove unused packages. 2016-05-15 00:04:56 -07:00
Harshavardhana
7de206cb85 XL: ListVols should provide consistent view. (#1648)
Additionally get list of all volumes in parallel for aggregation
and quorum verification.

Fixes #1647
2016-05-14 23:57:57 -07:00
Harshavardhana
498ce1e9bb handler: Add a waitgroup to avoid expect100Continue crash. (#1623)
This waitgroup allows for safe blocking operation where we can cleanly
control the flow of the writes and the underlying pipe altogether.

Fixes #1553
2016-05-14 17:18:00 -07:00
Harshavardhana
5b29cefd40 api: DeleteObject should always return 204. (#1645)
Fixes #1643
2016-05-14 15:47:19 -07:00
Hori Ryota
e03ebfd13b Add default cmd (#1625) 2016-05-14 02:58:50 -07:00
Harshavardhana
74c23a3544 docs: Move developer docs from top-level to its own directory. (#1642) 2016-05-14 02:47:16 -07:00
Harshavardhana
025054fb36 XL: CreateFile/ReadFile should write and read from all disks in parallel. (#1612)
* XL: CreateFile should write to all disks in parallel.

* XL: ReadFile should read from all disks in parallel.
2016-05-14 01:57:04 -07:00
Bala FA
7264cd2ab3 Fix error message when wrong set of disks are passed (#1634)
Previously, when a wrong set of disks was given compared to the last minio server
run, it threw an unclear error message. This is fixed by returning
appropriate errors.

Fixes #1591
2016-05-13 23:04:10 -07:00
Harshavardhana
0e4e9c4bc1 XL: ListDir should return each List from a random disk in the set. (#1613)
Fixes #1609
2016-05-13 18:12:26 -07:00
Krishna Srinivas
8099396ff0 xl/putObject: Should take care of the situation if an object already exists at the location. (#1606)
Fixes  #1598 #1594 #1595
2016-05-13 11:52:36 -07:00
Krishna Srinivas
d267696110 Validation: Reject object names with trailing "/". (#1619)
Fixes #1616
2016-05-13 11:43:06 -07:00
Bala FA
43539a0c86 posix: parseDirents() should follow symlink and get values. (#1631)
Previously parseDirents() ignored symbolic links. This patch fixes
the issue by following the symlink using os.Stat().

Fixes #1545
2016-05-13 11:39:48 -07:00
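Since os.Stat (unlike os.Lstat) follows symbolic links, the fix can be sketched as resolving each symlink entry to its target type; the names here are illustrative, not taken from parseDirents() itself:

```
import (
	"os"
	"path/filepath"
)

// resolveDirent follows a symlink via os.Stat and reports whether the
// entry is a directory.
func resolveDirent(dirPath, name string) (isDir bool, err error) {
	fi, err := os.Stat(filepath.Join(dirPath, name))
	if err != nil {
		return false, err
	}
	return fi.IsDir(), nil
}
```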
Krishnan Parthasarathi
9e45d138cc Closed readCloser for each multipart in xl.GetObject (#1629)
* Closed readCloser for each multipart in xl.GetObject
2016-05-13 04:50:13 -07:00
karthic rao
ee8605e333 Make bucket failure fix with high concurrent load (#1630) 2016-05-13 04:03:38 -07:00
karthic rao
e4958f9757 Removing regexp check and adding string based check, regexp check was unnecessary here (#1627) 2016-05-13 03:33:53 -07:00
Harshavardhana
b62774d32f storage/xl: Return errVolumeAccessDenied if disks cannot be accessed. (#1621)
Fixes #1614
2016-05-12 21:01:11 -07:00
koolhead17
d6e0f3ab33 added awscli commands & minor typo fix. (#1578) 2016-05-12 16:35:11 -07:00
Bala FA
3ff0a56e62 XL: Ignore errDiskNotFound in certain situations (#1610)
When a disk is removed while an operation is going on
(e.g. single/multipart put object, list/multipart list objects etc.),
it's required to ignore the errDiskNotFound error and continue the
operation.

Fixes #1552
2016-05-11 23:42:14 -07:00
Harshavardhana
50431e91a6 erasure: Handle failed disks so that we initialize properly if they are missing. (#1607)
Fixes #1592
Fixes #1579
2016-05-11 18:58:32 -07:00
Harshavardhana
d4745c7d6a object: PutObjectHandler should set the md5Sum properly. (#1604)
Additionally, add a test case for validating that we
reply BadDigest properly.

Fixes #1603
2016-05-11 16:13:37 -07:00
Bala FA
adbcafefad xl/CreateFile: handle errFileNameTooLong error properly (#1523)
When errFileNameTooLong error is returned from posix, xl.CreateFile()
treats the error specially by returning the same error immediately.

Fixes #1501
2016-05-11 12:55:02 -07:00
Harshavardhana
86e5d71519 erasure: MakeVol, DeleteVol and StatVol should hold locks. (#1597)
Since there is a good amount of overlap, each code path has to lock
properly for the operation it is going to perform.

- MakeVol create vols in a routine on all disks, hold locks.
- DeleteVol delete vols in a routine on all disks, hold locks.
- StatVol stat vols in a routine on all disks, hold locks.

Fixes #1588
2016-05-11 12:54:21 -07:00
Harshavardhana
72748d2073 erasure: healVolume err should be different from shadowed version. (#1590)
Multiple go-routines updating the same shadowed variable can
cause a data race; avoid it by using a separate err variable.

Fixes #1589
2016-05-11 01:36:09 -07:00
Harshavardhana
49141eb3e4 http: Remove minhttp package and use standard Golang http. (#1587)
The functionality provided by minhttp will be implemented
cleanly through our own APIs. Since we are not going to
send SIGUSR2 and manage configuration in that manner, it
doesn't make sense to use minhttp.

Fixes #1586
2016-05-10 18:03:00 -07:00
Harshavardhana
d1fa1d9352 Remove binary files from previous commit. 2016-05-10 15:49:17 -07:00
karthic rao
26e2c4bf4d Replacing fastsha256 with crypto/sha256 package from golang standard package (#1584) 2016-05-10 14:20:11 -07:00
Krishna Srinivas
b044336329 XL/GetObject: If the offset does not fall in the first "dataBlock" it gives incorrect data. (#1583)
Fixes #1582
2016-05-10 11:38:49 -07:00
Krishna Srinivas
e99cb05516 XL/GetObject: offset should be reset to 0 after reading first part. (#1580) (#1581) 2016-05-10 10:38:12 -07:00
Krishna Srinivas
409e09c1e5 XL/Selfheal: skip reading from disk if ReadFile had returned error. (#1575) 2016-05-10 01:24:58 -07:00
Krishna Srinivas
c314a98c1a XL/list: fix panic on list when a disk is down. (#1562) 2016-05-10 00:35:29 -07:00
Harshavardhana
5f0ca64346 erasure: listOnlineDisks should return errFileNotFound for errReadQuorum. (#1573)
Fixes #1571
2016-05-10 00:10:34 -07:00
Harshavardhana
0db3218d5d xl: getPartsMetadata fetch parts and decode in go-routine. (#1569)
Ref #1516
2016-05-09 23:51:05 -07:00
Harshavardhana
eec41c369c posix: Return diskNotFound error rather than errVolumeNotFound (#1568)
Fixes #1559
2016-05-09 18:57:39 -07:00
Harshavardhana
b66c3bf35e server: Enable server profiling as needed. (#1565) 2016-05-09 16:18:56 -07:00
Harshavardhana
f733120d3d xl: CompleteMultipartUpload make sure to delete uploads.json (#1539)
Fixes #1537

Ref #1540 - for missing functionality in this patch.
2016-05-09 12:09:48 -07:00
Krishna Srinivas
6627388dc3 posix: remove dead code related to posix reserved suffixes. (#1555) 2016-05-09 11:40:30 -07:00
Harshavardhana
9d41414fb5 posix: reserved files should be filtered out at posix not object layer. (#1554) 2016-05-09 02:53:08 -07:00
Harshavardhana
722abe2d0f xl/fs: pathJoin now takes variadic inputs. (#1550)
Retains slash for the last element.

Fixes #1546
2016-05-09 00:46:54 -07:00
Krishna Srinivas
04a5b25929 Multipart: Minimum part size limit does not apply to the last part during CompleteMultipartUpload. (#1518) (#1538) 2016-05-08 23:49:49 -07:00
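Taken together with the original 5MB check from #1518 (further down in this list), the resulting rule can be sketched as below; minPartSize, errPartTooSmall and isLastPart are illustrative names, not the identifiers used in the patch:

```
import "errors"

const minPartSize = 5 * 1024 * 1024 // 5MB minimum part size

var errPartTooSmall = errors.New("part size less than the minimum allowed")

// checkPartSize rejects undersized parts, except for the last part of an upload.
func checkPartSize(size int64, isLastPart bool) error {
	if size < minPartSize && !isLastPart {
		return errPartTooSmall
	}
	return nil
}
```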
Harshavardhana
90ea494338 erasure: waitCloser should implement CloseWithError. (#1543)
This is needed so that the other end of the pipe receives
an error and cleans up temporary files.
2016-05-08 16:26:10 -07:00
Harshavardhana
a8fdd04e62 erasure: ReadFile should honor proper offsets. (#1542)
Fixes #1535
2016-05-08 15:39:24 -07:00
Harshavardhana
76c511c9fe api: Extend S3 errors with Minio errors. (#1533)
Fixes #1530
2016-05-08 12:36:16 -07:00
Krishna Srinivas
75320f70d0 multipart: reject part upload if size is less than 5MB. (#1518) 2016-05-08 12:06:05 -07:00
Krishna Srinivas
88e1c04259 XL/ListDir: break out of loop if list on one disk is a success. (#1534) 2016-05-08 12:05:19 -07:00
Krishna Srinivas
a205aca6d2 init: Cleanup .minio/tmp directories recursively. Also takes care of cleaning up of parts directory during abortMultipartUpload. (#1532) 2016-05-08 10:15:34 -07:00
Harshavardhana
3f51dd4fd4 xl: CompleteMultipartUpload should rename files in a routine. (#1527)
This solves the client timeout while renaming 9000+ parts.

Fixes #1526
2016-05-08 02:38:35 -07:00
Harshavardhana
56b7df90e1 xl/fs: ListObjectParts should set nextPartNumberMarker properly. (#1528)
List requests for more than 1000 parts would lead to an infinite
loop.

Fixes #1522
2016-05-08 02:21:12 -07:00
Harshavardhana
a56d5ef415 xl/fs: isFunctions should only return boolean. (#1525)
Log the unrecognized errors.
2016-05-08 01:58:05 -07:00
Harshavardhana
937d68202d server: Deadcode removal. (#1517) 2016-05-07 21:47:33 -07:00
Harshavardhana
bf563afb80 xl: DeleteObject regression over errChannel. (#1521)
DeleteObject would hang indefinitely - fix it.
2016-05-07 12:48:12 -07:00
Harshavardhana
091c1e8456 copyObject: No need to verify md5sum. (#1520)
Multipart objects are kept in a non-hex md5sum format.
This format doesn't comply with hex, so decoding
would fail invariably.

This is not necessary to validate, and it is not an expected
error during a CopyObject operation.

Fixes #1519
2016-05-07 03:43:08 -07:00
Harshavardhana
751fa972f5 xl/fs: Multipart re-org introduce "uploads.json" (#1505)
Fixes #1457
2016-05-07 02:08:03 -07:00
Harshavardhana
434423de89 xl: Move format detection inside xl objects. (#1515)
Fixes #1449
2016-05-07 00:59:43 -07:00
Harshavardhana
a20ccb1e83 server: Print proper endpoint, along with https if configured. (#1514)
Fixes #1492
2016-05-06 21:18:29 -07:00
Harshavardhana
0625c050e6 xl/tests: Enable server handler tests over XL. (#1512)
Fixes #1513
2016-05-06 16:47:23 -07:00
Harshavardhana
0b74f5624e xl: Fix how we deal with read offsets at erasure layer. (#1511)
This requires skipping the necessary parts of dataBlocks during
the decoding phase and properly skipping the
entries as needed.

Thanks to Karthic for reproducing this important issue.

Fixes #1503
2016-05-06 16:25:08 -07:00
Krishna Srinivas
c06b9abc15 bucket-handlers: do not unescape marker as gorilla layer would have already done it. (#1495) (#1510) 2016-05-06 16:04:46 -07:00
Krishna Srinivas
a5d31d4254 XL/ListObjects: use string.TrimSuffix instead of Trim. (#1498) (#1509) 2016-05-06 13:28:55 -07:00
karthic rao
20ca65c793 Cleanup: mispell fixes 2016-05-06 12:32:44 -07:00
karthic rao
0b4bbe6d9e Adding XL Object layer validation for existing unit tests of single node (#1499)
object layer.

Adding isBucketExist check for GetObjectInfo in the XL layer.
2016-05-06 11:57:04 -07:00
Krishna Srinivas
48d3be36da XL/ListObjects: Fix ordering issue during listing if the files were uploaded as multipart uploads. (#1498) (#1506)
i.e. if two files "tmp" and "tmp.1" are uploaded as multipart, we would list "tmp.1" before "tmp" since "tmp.1/" < "tmp/"
2016-05-06 10:19:09 -07:00
Harshavardhana
5133ea50bd xl/fs: Make i/o operations atomic. (#1496) 2016-05-05 20:28:22 -07:00
Harshavardhana
17868ccd7f handlers: overhaul entire writErrorResponse, simplify. (#1472) 2016-05-05 20:24:29 -07:00
Harshavardhana
ba5805e60a bucketPolicy: Do not use regexes, just do prefix matches. (#1497)
AWS ARN supports wildcards and this is a flat namespace, so simple
prefix matching is fine.

Fixes #1481
Fixes #1482
2016-05-05 19:58:48 -07:00
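A sketch of plain prefix matching on ARN-style resources as described above; the function name is illustrative, not from the patch:

```
import "strings"

// resourceMatches treats a trailing "*" as a prefix wildcard and otherwise
// requires an exact match.
func resourceMatches(pattern, resource string) bool {
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(resource, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == resource
}
```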
Harshavardhana
ca097de96c xl/fs: Add initObjectLayer function. (#1494)
Fixes #1493
2016-05-05 15:00:03 -07:00
Bala FA
658a595d7a xl-erasure: RenameFile should support quorum. (#1487)
Fixes #1463
2016-05-05 13:03:53 -07:00
Krishna Srinivas
247e835d7b object: move go-routine listing from posix to objectLayer. (#1491) 2016-05-05 12:51:56 -07:00
Harshavardhana
46680788f9 xl/fs: cleanup '/.minio/tmp' directory on each initialization. (#1490) 2016-05-05 01:54:43 -07:00
Harshavardhana
ad40036cba posix: filepath shouldn't be used anymore use path.Join (#1486) 2016-05-05 01:39:26 -07:00
Harshavardhana
82fbe908a3 object: DeleteBucket should return proper error for BucketNotEmpty. (#1489)
Fixes #1488
2016-05-05 01:35:21 -07:00
Harshavardhana
f145e1042f quick/config: No need to use Data() with type assertion. (#1480)
Since the input to quick.New() is a pointer, the unmarshalled value
is already available internally; subsequent type assertions
are not needed.

Thanks to Bala for finding this behavior.

Fixes #1475
2016-05-04 20:22:15 -07:00
Rajiv Makhijani
9dccbd6478 New Dockerfile for building & running minio inside Docker (inc. autobuild support) (#1473) (#1485) 2016-05-04 17:07:19 -07:00
karthic rao
82113b747c Resource matching fix to overcome issues with regular expression based match (#1476) 2016-05-04 16:56:57 -07:00
Rajiv Makhijani
a5959789d5 Make minimum file space percent a constant (#1484) 2016-05-04 15:39:06 -07:00
Harshavardhana
6988ed9257 xl/getObjectInfo: Returns back proper size, modTime and md5Sum. (#1479)
Fixes #1469
2016-05-04 15:28:58 -07:00
Rajiv Makhijani
321aefa026 Add Response for PostPolicyBucketHandler (#1477) (#1483) 2016-05-04 15:24:10 -07:00
Harshavardhana
dd417e5476 fs: Handle cases of PutObject for an existing prefix. (#1478) 2016-05-04 12:18:40 -07:00
Bala FA
da3a53376c server: save and compare multiple disks are used (#1474)
When the server is run with multiple disks using the xl interface, where
order and count of disks are important, this patch saves such disk
configuration and compares it on the next run to detect a mismatch.

Fixes #1458
2016-05-04 12:18:20 -07:00
Harshavardhana
e4d89d8156 xl/deleteObject: Support deleting special multipart object. (#1470)
Fixes #1452
2016-05-03 17:52:54 -07:00
Harshavardhana
d0e854afb7 xl/fs: Bring in ".minio/tmp" directory support. (#1464)
All transactions happen through this directory inside ".minio/tmp".
Adding this allows us to remove any temporary files which were not
committed before.

Fixes #1462
Fixes #1444
2016-05-03 16:10:24 -07:00
Harshavardhana
6f1811ee4d config: Migration should save region properly. (#1468)
Fixes #1466
2016-05-03 15:17:58 -07:00
Yurii
bba5468368 minio: Replace 'bucket already exists' error by 'bucket already owned by you'. (#1465)
The S3 API returns the BucketAlreadyExists error when some other user has such a bucket.
If the user that creates the bucket already owns it, S3 returns BucketAlreadyOwnedByYou.
As minio has only one user, it should behave accordingly.
Otherwise it causes failures in applications that ignore creation of an already existing bucket in their own account, but fail when the bucket name is used by someone else.
2016-05-03 03:19:04 -07:00
Harshavardhana
7ae40eb1bb minhttp: Remove probe usage, move to golang error. (#1459)
Fixes #1454
2016-05-03 01:07:34 -07:00
Harshavardhana
ad8e27a963 xl: Rename 'xl.json' to 'file.json' (#1461)
Fixes #1460
2016-05-02 17:42:01 -07:00
Harshavardhana
ac7a7cec20 bucket-policy: Delete policy should remove policy properly. (#1456) 2016-05-02 16:58:10 -07:00
Harshavardhana
afd59c45dc xl/fs: Move few functions into common code. (#1453)
- PutObject()
- PutObjectPart()
- NewMultipartUpload()
- AbortMultipartUpload()

Implementations across both FS and XL object layer
share common implementation.
2016-05-02 16:57:31 -07:00
Harshavardhana
3bf3d18f1f rpc/client: Implement RenameFile properly. (#1443) 2016-05-02 03:12:18 -07:00
Harshavardhana
8102a4712a xl/metadata: Keep the json erasure tag consistent. (#1447)
Currently the on-disk JSON has "Erasure"; we should
keep the name consistent and move to lower case instead.
2016-05-02 01:13:07 -07:00
karthic rao
2393a3a0be XL non-recursive fix (#1450) 2016-05-01 23:16:44 -07:00
Harshavardhana
d006129fb5 xl/vol: Add healing and quorum support for StatVol, MakeVol.
Fixes #1437
2016-05-01 18:42:00 -07:00
Harshavardhana
7caa82f32f xl/fs: Rename minioMetaVolume to minioMetaBucket. (#1442) 2016-05-01 18:13:10 -07:00
Krishna Srinivas
286de4de2c XL - fixes mostly related to multipart listing. (#1441)
* XL/Multipart: Use json.NewDecoder to decode read stream.

* XL/Multipart: fix recursive and non-recursive listing.

* XL/Multipart: Create object part with md5sum later using RenameFile.

* XL/Multipart: ListObjectParts should list parts in order.

previously: uploadID.10.md5sum < uploadID.2.md5sum
fix       : uploadID.00010.md5sum > uploadID.00002.md5sum

* XL/Multipart: Keep the size of each part in the multipart metadata file to avoid stats on the parts.

* XL/Multipart: fix listing bug which was showing size of the multipart uploaded objects as 0.
2016-05-01 17:52:16 -07:00
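The part-ordering fix shown above (uploadID.00002 vs uploadID.2) amounts to zero-padding the part number in the on-disk name so that lexical ordering matches numeric part order; a sketch with illustrative names:

```
import "fmt"

// partFileName zero-pads the part number to five digits,
// e.g. partFileName("uid", 10, "md5") == "uid.00010.md5".
func partFileName(uploadID string, partNumber int, md5sum string) string {
	return fmt.Sprintf("%s.%05d.%s", uploadID, partNumber, md5sum)
}
```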
Harshavardhana
ba7a55c321 xl: ReedSolomon code fix small file erasure bug. (#1431)
For files less than 'dataBlocks', erasure encoding would fail
with short data due to a bug in the implementation itself.

Relax the error return, even a single byte can be properly
erasure coded without issues.

Fixes #1413
2016-05-01 15:30:40 -07:00
Harshavardhana
e05aa762a9 fs: Create object part with md5sum later using RenameFile. (#1440)
Fixes #1340.
2016-05-01 14:50:30 -07:00
Krishna Srinivas
0c27d8e5b1 XL/Multipart: maintain the parts info in multipart.json after complete-multipart-upload. (#1436) 2016-05-01 01:25:48 -07:00
Harshavardhana
443ec37765 xl: Add disk usages properly for ListVols() and StatVol(). (#1435) 2016-04-30 23:34:43 -07:00
Bala FA
d5df8b8b8d xl: remove unused err return in listFileVersions() (#1434) 2016-04-30 03:21:54 -07:00
Harshavardhana
ac2933c799 windows: Enable erasure test for windows. (#1432)
Fixes #1363
2016-04-30 02:52:23 -07:00
Bala FA
84afec9ae0 xl: fix DeleteFile() removing meta data files without updating it (#1433)
Fixes #1428 #1427
2016-04-30 02:52:15 -07:00
Harshavardhana
27c50a70cc obj: support object names with curly braces. (#1429)
Example files like

```
/usr/share/mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384}/ubufox@ubuntu.com.xpi
Song's Son.ogg
```

Should be supported.
2016-04-29 20:19:08 -07:00
Bala FA
a978975eea xl: add quorum support for DeleteFile() (#1426)
Fixes #1396
2016-04-29 18:43:18 -07:00
Harshavardhana
dc45ea3946 log: Fix file logging, enable it properly. (#1424)
Fixes #1419
2016-04-29 17:54:02 -07:00
Harshavardhana
9eb56f0676 xl/healFile: Handle errors and continue (#1425)
Fixes #1354
2016-04-29 17:52:49 -07:00
Harshavardhana
10a010c1ad xl/fs: Object layer - keep common functions into single place. (#1423) 2016-04-29 17:52:17 -07:00
Harshavardhana
a9935f886c vendor: update reedsolomon package with new perm improvements. (#1422) 2016-04-29 14:59:03 -07:00
Harshavardhana
4e34e03dd4 xl/fs: Split object layer into interface. (#1415) 2016-04-29 14:24:10 -07:00
nomadlogic
4d1b3d5e9a docs: FreeBSD minio source instructions (#1421)
* Modifications of documentation for using and building minio server on FreeBSD.

- update example of enabling compression to use lz4 vs gzip and provide
  explanation of benefits of lz4

- provide walkthrough of building minio server on FreeBSD with binary
  golang and gmake

* Fixing markdown syntax for code blocks so we render correctly.

* typo fix

* reword compression enablement docs for easier reading
2016-04-29 13:35:20 -07:00
Krishna Srinivas
7066ce5160 XL/Multipart: rename the parts instead of concatenating. (#1416) 2016-04-29 12:17:48 -07:00
Krishna Srinivas
39df425b2a lock: bug fixes. (#1420)
* release Lock on map before trying to Lock NS
* delete NS lock from map if no more refs.
* refactor to avoid repetition of code.
2016-04-29 11:39:20 -07:00
Harshavardhana
984903cce1 server: Add global namespace lock. (#1398)
Fixes #1393
2016-04-29 01:29:09 -07:00
karthic rao
8deddb82f4 Cleanup: Moving IsValidLocationContraint to handler utils 2016-04-28 20:01:11 -07:00
Harshavardhana
a1a667ae5d xl: Change fileMetadata to xlMetadata. (#1404)
Finalized backend format

```
{
    "version": "1.0.0",
    "stat": {
        "size": 24256,
        "modTime": "2016-04-28T00:11:37.843Z"
    },
    "erasure": {
        "data": 5,
        "parity": 5,
        "blockSize": 4194304
    },
    "minio": {
        "release": "RELEASE.2016-04-28T00-09-47Z"
    }
}
```
2016-04-28 19:27:02 -07:00
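A Go-struct sketch of the finalized format shown above; the type and field names are inferred from the sample JSON and are not necessarily the ones used in the patch (assumes the standard `time` package for the modTime field):

```
import "time"

type xlMetadata struct {
	Version string `json:"version"`
	Stat    struct {
		Size    int64     `json:"size"`
		ModTime time.Time `json:"modTime"`
	} `json:"stat"`
	Erasure struct {
		Data      int   `json:"data"`
		Parity    int   `json:"parity"`
		BlockSize int64 `json:"blockSize"`
	} `json:"erasure"`
	Minio struct {
		Release string `json:"release"`
	} `json:"minio"`
}
```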
Harshavardhana
41b35cff7b xl: Fixes a bug in read quorum ListFiles() (#1412)
Fixes a bug in #1406
2016-04-28 17:32:46 -07:00
Harshavardhana
eed756777b object: Allow '[' and ']' as part of object names. 2016-04-28 13:29:32 -07:00
Harshavardhana
2ac10209cc xl: ListFiles - return sorted files. (#1408)
Fixes #1407
2016-04-28 01:48:57 -07:00
Bala FA
5bd6b0b510 xl: check read quorum for ListFiles() (#1406)
Fixes #1364
2016-04-27 21:09:26 -07:00
karthic rao
1813e9c070 Cleanup - Comments and readability fixes (#1386) 2016-04-27 19:28:13 -07:00
Scott McClellan
f87a19a15c Minor changes to CONTIRBUTING.md instructions (#1403) 2016-04-27 16:57:16 -07:00
Harshavardhana
5fffd558d0 xl/heal: Make healFile non-blocking for StatFile and ReadFile. (#1399)
Fixes #1355
2016-04-27 15:10:19 -07:00
Harshavardhana
b51bef85e6 objectapi: ListMultipart now lists more than 1000 entries in non-recursive. (#1397)
Having keyMarker with "/" is a valid marker.

Fixes #1394
2016-04-27 13:19:48 -07:00
Krishna Srinivas
d0e5470050 ListMultipart fixes (#1392)
* ListMultipart: listLeafEntries() - return earlier if a directory is found.
* ListMultipart: do listLeafEntries() only for directories.
2016-04-27 00:15:40 -07:00
Harshavardhana
90987df9b4 objectapi: Simplify ListMultipart combine recursive and non-recursive. (#1390)
Fixes #1365
2016-04-26 17:57:16 -07:00
Harshavardhana
ad1abc4486 xl-v1/Cleanup: use listOnlineDisks instead of getReadableDisks. (#1389)
Remove usage of getFileVersionQuorumMap, instead use listFileVersions
to get the version list and extract higherVersion.

Fixes #1379
Fixes #1378
Fixes #1377
2016-04-26 13:03:37 -07:00
Krishna Srinivas
4333e529e6 xl/ListFiles: return as many objects as requested. (#1383)
* xl/ListFiles: return as many objects as requested and take care of eof (#1361)

* xl/ListFiles: fix review comments.

* xl/ListFiles: Add windows filepath translation.

* xl/ListFiles: Use slashSeparator instead of "/". Remove filepath.FromSlash() as golang-windows takes care of it automatically.
2016-04-26 10:35:39 -07:00
koolhead17
9685f88b84 Added FreeBSD installation steps with ZFS. (#1388) 2016-04-26 10:17:02 -07:00
Harshavardhana
5f80edf232 routers: Fix a crash while initializing network fs. (#1382)
The crash happens with 'minio server filename', i.e. when a file name is
provided instead of a directory as the command line argument.

```
panic: runtime error: slice bounds out of range

goroutine 1 [running]:
panic(0x5eb460, 0xc82000e0b0)
	/usr/local/opt/go/libexec/src/runtime/panic.go:464 +0x3e6
main.splitNetPath(0x7fff5fbff9bd, 0x7, 0x0, 0x0, 0x0, 0x0)
	/Users/harsha/mygo/src/github.com/minio/minio/network-fs.go:49 +0xb7
main.newNetworkFS(0x7fff5fbff9bd, 0x7, 0x0, 0x0, 0x0, 0x0)
	/Users/harsha/mygo/src/github.com/minio/minio/network-fs.go:90 +0x20a
main.configureServerHandler(0xc82024e1c8, 0x5, 0xc8200640e0, 0x1, 0x1, 0x0, 0x0)
	/Users/harsha/mygo/src/github.com/minio/minio/routers.go:43 +0x6ce
main.configureServer(0xc82024e1c8, 0x5, 0xc8200640e0, 0x1, 0x1, 0x5)
	/Users/harsha/mygo/src/github.com/minio/minio/server-main.go:86 +0x67
```
2016-04-25 18:10:40 -07:00
Harshavardhana
42254b5c4d xl: Rename blockingWriteCloser to waitCloser. (#1376) 2016-04-25 16:00:58 -07:00
Harshavardhana
00c697393a Merge pull request #1381 from minio/xl-layer
Implement XL layer
2016-04-25 15:59:30 -07:00
Harshavardhana
55032ffdf9 xl: Simplify blockingWriter and its usage. (#1373)
This removes odd races since we don't need to
track errors and avoids locking. All we need
is a Wait() and Done() waitgroup.
2016-04-25 12:47:31 -07:00
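A minimal sketch of a Wait()/Done() based write closer along the lines described here; the type name follows the later rename to waitCloser, while everything else (constructor, Release method) is illustrative:

```
import (
	"io"
	"sync"
)

// waitCloser blocks Close() until the read consumer signals that the data
// has been committed.
type waitCloser struct {
	io.WriteCloser
	wg sync.WaitGroup
}

func newWaitCloser(w io.WriteCloser) *waitCloser {
	wc := &waitCloser{WriteCloser: w}
	wc.wg.Add(1)
	return wc
}

// Release is called by the read consumer once the data is safely committed.
func (w *waitCloser) Release() { w.wg.Done() }

// Close closes the underlying writer and then waits for Release.
func (w *waitCloser) Close() error {
	err := w.WriteCloser.Close()
	w.wg.Wait()
	return err
}
```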
Harshavardhana
8bce699dae xl: Add logging. (#1372) 2016-04-25 12:47:31 -07:00
Harshavardhana
57f35c2bcc xl: Introduce new blocking writer to make CreateFile atomic. (#1362)
Creates a new write closer that must be released
by the read consumer. This is necessary because,
while committing the underlying writers in erasure
coding, we need to make sure we reply success only if
we have committed to disk.

This in turn also fixes plethora of bugs related to
subsequent PutObject() races with namespace locking.

This patch also enables most of the tests, other than
ListObjects paging which has some issues still.

Fixes #1358, #1360
2016-04-25 12:47:31 -07:00
Harshavardhana
cab6805f09 xl: Enable a subset of tests for XL branch. (#1359) 2016-04-25 12:47:31 -07:00
Krishna Srinivas
8c85815106 xl: refactor functions to xl-v1-common.go xl-v1-utils.go. (#1357) 2016-04-25 12:47:31 -07:00
Krishna Srinivas
becc814531 Xl layer selfheal quorum2
* xl/selfheal: selfheal based on read quorum on GET

* xl: getReadableDisks() also returns whether self-heal is needed so that this info can be used by ReadFile/SelfHeal/StatFile.

* xl: trigger selfheal from StatFile.
2016-04-25 12:47:31 -07:00
Harshavardhana
9bd9441107 xl: Simplify reading metadata and add a new fileMetadata type. (#1346) 2016-04-25 12:47:31 -07:00
Harshavardhana
f3784d1087 xl: Handle read quorum for StatVol, ListVols 2016-04-25 12:47:31 -07:00
Harshavardhana
91588209fa obj: Object api handle all errors in common location. (#1343) 2016-04-25 12:47:31 -07:00
Krishna Srinivas
5c33b68318 xl: code refactor, cleanup ReadFile and CreateFile. 2016-04-25 12:47:31 -07:00
Bala FA
45b3d3e21f xl: add quorum support for create file 2016-04-25 12:47:31 -07:00
Harshavardhana
141a44bfbf xl: Fix ReadFile to keep the order always for reading the data back. (#1339)
Also fixes a stackoverflow bug in namespace locking.
2016-04-25 12:47:31 -07:00
Harshavardhana
c7bf471c9e list/xl: Fix the way marker is handled in leafDirectory verification. 2016-04-25 12:47:31 -07:00
Krishna Srinivas
c302875774 selfheal: implement self-heal. Heals the missing parts. (#1335) 2016-04-25 12:47:31 -07:00
Harshavardhana
b76f3f1d62 xl: Add more fixes and cleanup.
Simplify cleanup of temporary files during createFile operations.
2016-04-25 12:47:31 -07:00
Bala FA
ada0f82b9a xl: add quorum support for read file and name space locking. (#1333) 2016-04-25 12:47:31 -07:00
Harshavardhana
a98a7fb1ad Implement XL layer - preliminary work. 2016-04-25 12:47:31 -07:00
Harshavardhana
bf8a9702a4 tests: Fix a bug in TestObjectAPIIsUploadIDExists. (#1375)
The following code crashes when upload ID does not
exist, since we are setting err == nil when we find
err == errFileNotFound.

```
if e == nil {
   t.Fatal(e.Error())
}
```

Fix it.
2016-04-25 12:47:08 -07:00
karthic rao
6e372f83b4 Tests: object api multipart tests and bug fixes. 2016-04-25 10:39:28 -07:00
Harshavardhana
e9fba04b36 logging: Enable logging across storage fs layer. (#1367)
Adds log.Debugf at all the layers - fixes #1074
2016-04-24 00:36:00 -07:00
Harshavardhana
d63d17012d tests: Add API suite tests back for object api. (#1352) 2016-04-21 23:40:01 -07:00
Harshavardhana
444d1f8a65 miniobrowser: Vendorize to new changes in miniobrowser. 2016-04-21 20:35:48 -07:00
karthic rao
560c3bd153 Adding return statement after error response in the latest commit to verify location constraint (#1348) 2016-04-21 20:08:08 -07:00
Harshavardhana
4cf73caf02 api: Add diskInfo as part of StatVol and ListVols. (#1349)
It is the buckets and volumes which need to have this
value rather than the DiskInfo API itself. Eventually
this can be extended to show disk usage per
bucket/volume whenever we have that functionality.

For now since buckets/volumes are thinly provisioned
this is the right approach.
2016-04-21 20:07:47 -07:00
Harshavardhana
1284ecc6f2 api: Fix verification of checkLeafDirectory. (#1347)
This fixes a problem where a leaf directory has more than 1000
entries, which also resulted in listing issues, leading to an infinite
loop.

Fixes #1334
2016-04-21 18:05:26 -07:00
karthic rao
cb1116725b api: verify Location constraint for make bucket. (#1342) 2016-04-20 17:35:38 -07:00
koolhead17
c3d0a3d51e Update README.md
Our community contributor brought this to our attention, so we have to add the region as well in the s3cmd config file.
2016-04-19 10:22:40 -07:00
Harshavardhana
e0f8fed011 object: handle Error responses and handle errDiskFull. (#1331) 2016-04-19 02:42:10 -07:00
Harshavardhana
6bc17a3aea server: Attempt to increase max open files. (#1328) 2016-04-18 22:05:32 -07:00
Dave Henderson
ff9a6b00cc isContainerized should look for /.dockerenv not /.dockerinit
Signed-off-by: Dave Henderson <dhenderson@gmail.com>
2016-04-18 19:26:56 -07:00
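A sketch of the check described above: Docker creates /.dockerenv inside containers, while /.dockerinit is no longer present on modern Docker; the function body is illustrative, not the exact code in the commit:

```
import "os"

// isContainerized reports whether the process appears to run inside Docker.
func isContainerized() bool {
	_, err := os.Stat("/.dockerenv")
	return err == nil
}
```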
matt robinson
af907a35a9 add environment var to explicitly indicate containerized and allow running as root (#1327) 2016-04-18 16:30:11 -07:00
Harshavardhana
b47d722d8e fs: Fix filtering out valid paths from previous #1321 fix (#1323)
Fixes #1324
2016-04-17 12:00:23 -07:00
Harshavardhana
33633fd15d fs: Filter out valid paths and volnames. (#1321) 2016-04-16 19:33:29 -07:00
Harshavardhana
b2ec7da9f9 Merge pull request #1314 from minio/split-fs
Split fs code into storage layer and object layer.
2016-04-16 18:09:31 -07:00
Harshavardhana
be002ac01e fs/object: Fix issues from review comments. 2016-04-16 17:57:14 -07:00
Krishna Srinivas
149c6ca094 listMultipart: bugfixes. (#1318) 2016-04-16 16:25:53 -07:00
Harshavardhana
8457af5708 fs: Add proper volume and path validation. 2016-04-16 16:25:53 -07:00
Krishna Srinivas
caa35f68fa listMultipart: implement support for marker. (#1313) 2016-04-16 16:25:53 -07:00
Harshavardhana
30b0b4deba storage/server/client: Enable storage server, enable client storage. 2016-04-16 16:25:53 -07:00
Krishna Srinivas
01a439f95b refactor: add multipart code to the object layer. 2016-04-16 16:25:53 -07:00
Krishna Srinivas
3c48537f20 refactor: refactor code to separate fs into object-layer and fs layer. (#1305) 2016-04-16 16:25:53 -07:00
karthic rao
188bb92d8a bucket-policy parset tests, and bug fixes (#1317) 2016-04-15 18:23:19 -07:00
Harshavardhana
6b3fc03701 docker: Update docker document 2016-04-15 17:40:56 -07:00
Harshavardhana
8112291d43 Add FreeBSD binary link and make a release 2016-04-14 17:22:47 -07:00
Harshavardhana
93666827f4 release: Add freebsd/amd64 build and remove zip, tgz. (#1316) 2016-04-13 23:34:55 -07:00
GarimaKapoor
ac30bef72a Revised Docker.md (#1311) 2016-04-11 23:59:12 -07:00
Bala FA
bea6f33b08 backend/fs: remove timer channel from scanMultipartDir() (#1310)
Previously scanMultipartDir() returned an object info channel and a timer
channel, where the timer channel was used to check whether the object info
channel was alive or not. This caused a race condition where a timeout
could occur while the object info channel was in use.

This patch fixes the issue by removing the timer channel and using the object
info channel directly, where each object info has an End bool field
indicating whether the received object info is the last one.
2016-04-11 19:00:08 -07:00
Michael Werle
9fb1c79456 Improved Docker examples (#1308)
- Fixed a bug in the persistent docker command ("server" in place of "export")
- Added example of how to set consistent keys with ephemeral data, particularly useful for testing.
2016-04-11 12:33:34 -07:00
Harshavardhana
6b5699b15f config: console logging should be enabled by default. (#1307) 2016-04-09 14:19:59 -07:00
Harshavardhana
33cd910d3a backend/fs: More cleanup and start using checkBucketArg. (#1306)
backend/fs: More cleanup and start using checkBucketArg.
2016-04-08 17:13:16 -07:00
Bala FA
6af761c86c enhance multipart functions to use fsDirent (#1304)
* backend/fs: scanMultipartDir returns directories only for recursive listing

* backend/fs: enhance multipart functions to use fsDirent
2016-04-08 11:46:03 -07:00
Matthew Buckett
bedd867c0b The fs command doesn't work any more, using server
Also there's no /export/data folder, which stops minio from starting, so /export is used instead.
2016-04-08 08:53:55 -07:00
Harshavardhana
37330bda98 Merge pull request #1302 from harshavardhana/web-api
web: Change /rpc to /webrpc
2016-04-08 02:04:06 -07:00
Harshavardhana
8603185f2f browser: Add new ui-assets.go 2016-04-08 01:47:30 -07:00
Harshavardhana
fbd02d530d web: Change /rpc to /webrpc 2016-04-08 01:46:46 -07:00
Harshavardhana
bad2f2afbb storage: Add storage interface. 2016-04-07 19:33:06 -07:00
Harshavardhana
b182e94acc signature: Handle presigned payload if set.
Validate payload with incoming content.

Fixes #1288
2016-04-07 03:04:18 -07:00
Anand Babu (AB) Periasamy
4e6c4da518 Update README.md 2016-04-06 20:54:06 -07:00
Donald Guy
e8cd1aad8d accessPolicy: prevent backdoor ListBucket via brute-force 404s, per docs + small fixes
* accessPolicy: copy object should require PutObject

* accessPolicy: cite mpu perms doc only for relevant operations

* accessPolicy: prevent backdoor ListBucket via brute-force 404s, per docs
2016-04-06 18:31:40 -07:00
Donald Guy
8b4a5f07b4 accessPolicy: allow anonymous HEAD for Getable objects
* accessPolicy: allow anonymous HEAD for Getable objects

* accessPolicy: allow anonymous HEAD of Listable Buckets
2016-04-06 16:40:54 -07:00
Harshavardhana
ff4e04d942 atomic/fs: use safe package for atomic writes, even in multipart. 2016-04-06 16:05:30 -07:00
Bala FA
dfba4ff397 doc: add multipart documentation about staging files 2016-04-05 19:56:19 -07:00
Harshavardhana
06e3171076 Merge pull request #1290 from balamurugana/devel
Refactor multipart upload
2016-04-05 18:21:46 -07:00
Bala FA
2b3a118636 Merge pull request #1 from harshavardhana/devel
Fix list objects test and remove all the old unnecessary files.
2016-04-06 06:43:43 +05:30
Harshavardhana
8986a6802a Fix ListMultipartUploads 'mc ls -I' now works properly. 2016-04-05 16:39:02 -07:00
Harshavardhana
3fcc60de91 Move the files and rename some functions.
- Rename dir.go as 'fs-multipart-dir.go'
- Move the push/pop to fs-multipart.go and rename them as save/lookup.
- Rename objectInfo instances in fs-multipart as multipartObjInfo.
2016-04-05 12:26:19 -07:00
Harshavardhana
9632c94e7a Fix list objects test and remove all the old unnecessary files.
- Fix tests for new changes.
- Change Golang err as 'e' for the time being, before we bring in
  probe removal change.
- Remove old structs and temporary files.
2016-04-05 12:07:19 -07:00
Bala.FA
083e4e9479 backend/fs: Refactor multipart upload
This patch modifies multipart upload related functions as below

* New multipart upload call creates file
  EXPORT_DIR/.minio/BUCKET/PATH/TO/OBJECT/UPLOAD_ID.uploadid

* Put object part call creates file
  EXPORT_DIR/.minio/BUCKET/PATH/TO/OBJECT/UPLOAD_ID.PART_NUMBER.MD5SUM_STRING

* Abort multipart call removes all files matching
  EXPORT_DIR/.minio/BUCKET/PATH/TO/OBJECT/UPLOAD_ID.*

* Complete multipart call does
  1. creates a staging file
  EXPORT_DIR/.minio/BUCKET/PATH/TO/OBJECT/UPLOAD_ID.complete.TEMP_NAME
  then renames to
  EXPORT_DIR/.minio/BUCKET/PATH/TO/OBJECT/UPLOAD_ID.complete
  2. rename staging file
  EXPORT_DIR/.minio/BUCKET/PATH/TO/OBJECT/UPLOAD_ID.complete
  to EXPORT_DIR/BUCKET/PATH/TO/OBJECT
2016-04-05 22:22:29 +05:30
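The file layout listed above can be sketched as simple path helpers; exportDir, bucket, object, uploadID, partNumber and md5sum are assumed inputs and the function names are illustrative:

```
import (
	"fmt"
	"path"
)

// uploadIDPath returns EXPORT_DIR/.minio/BUCKET/OBJECT/UPLOAD_ID.uploadid
func uploadIDPath(exportDir, bucket, object, uploadID string) string {
	return path.Join(exportDir, ".minio", bucket, object, uploadID+".uploadid")
}

// partPath returns EXPORT_DIR/.minio/BUCKET/OBJECT/UPLOAD_ID.PART_NUMBER.MD5SUM
func partPath(exportDir, bucket, object, uploadID string, partNumber int, md5sum string) string {
	return path.Join(exportDir, ".minio", bucket, object,
		fmt.Sprintf("%s.%d.%s", uploadID, partNumber, md5sum))
}
```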
Harshavardhana
c69fdf0cf2 listObjects: Cleanup and naming conventions.
- Marker should be escaped outside in handlers.

- Delimiter should be handled outside in handlers.

- Add missing comments and change the function names.

- Handle the case of 'maxKeys' when it's set to '0'; it's a valid
  case and should be treated as such.
2016-04-04 19:55:07 -07:00
Krishna Srinivas
85ab1df5a8 listObjects: do not do stat during readdir()
* listObjects: improve response time by not doing stat during readDir() operation.

* listObjects: Add windows support.

* listObjects: Readdir() in batches to conserve memory. Add solaris build.

* listObjects: cleanup code.
2016-04-04 17:27:55 -07:00
Harshavardhana
7623e0f8e8 docker: Fix bug in start.sh arguments. 2016-04-04 11:30:18 -07:00
Anand Babu (AB) Periasamy
9843aa1f7a Merge pull request #1284 from harshavardhana/list
objectAPI: Fix object API interface, remove unnecessary structs.
2016-04-03 21:13:48 -07:00
Harshavardhana
0479d4976b objectAPI: Fix object API interface, remove unnecessary structs.
ObjectAPI changes.
```
ListObjects(bucket, prefix, marker, delimiter string, maxKeys int) (ListObjectsInfo, *probe.Error)
ListMultipartUploads(bucket, objectPrefix, keyMarker, uploadIDMarker, delimiter string, maxUploads int) (ListMultipartsInfo, *probe.Error)
ListObjectParts(bucket, object, uploadID string, partNumberMarker, maxParts int) (ListPartsInfo, *probe.Error)
CompleteMultipartUpload(bucket string, object string, uploadID string, parts []completePart) (ObjectInfo, *probe.Error)
```
2016-04-03 15:25:01 -07:00
Anand Babu (AB) Periasamy
12515eabe2 Merge pull request #1280 from harshavardhana/region
signature: No need to validate region for getBucketLocation and listBuckets
2016-04-03 01:01:05 -07:00
Harshavardhana
a6a4e7e297 signature: No need to validate region for getBucketLocation and listBuckets.
This type of check is added for making sure that we can support
custom regions.

ListBuckets and GetBucketLocation are always "us-east-1"; the rest
should look for the configured region.

Fixes #1278
2016-04-02 18:42:32 -07:00
Anand Babu (AB) Periasamy
2c793a2ea7 Merge pull request #1282 from harshavardhana/remove-old-code
cleanup: Remove old donut/xl code and erasure implementation.
2016-04-02 18:14:21 -07:00
Anand Babu (AB) Periasamy
2bb262cc56 Merge pull request #1279 from harshavardhana/backend
config: Migrate to the new version. Remove backend details.
2016-04-02 17:59:21 -07:00
Harshavardhana
379e0abf03 cleanup: Remove old donut/xl code and erasure implementation.
This is a change to bring in 'klauspost/reedsolomon' library
in #1270 patch.
2016-04-02 17:30:35 -07:00
Harshavardhana
484ba91b08 config: Migrate to the new version. Remove backend details.
Migrate to new config format v4.
```
{
	"version": "4",
	"credential": {
		"accessKey": "WLGDGYAQYIGI833EV05A",
		"secretKey": "BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF"
	},
	"region": "us-east-1",
	"logger": {
		"console": {
			"enable": true,
			"level": "fatal"
		},
		"file": {
			"enable": false,
			"fileName": "",
			"level": "error"
		},
		"syslog": {
			"enable": false,
			"address": "",
			"level": "debug"
		}
	}
}
```

This patch also updates [minio cli spec](./minio.md)
2016-04-02 17:29:31 -07:00
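The config v4 JSON above maps onto a struct along these lines; this is a sketch inferred from the sample, not the actual types in the migration code:

```
type configV4 struct {
	Version    string `json:"version"`
	Credential struct {
		AccessKey string `json:"accessKey"`
		SecretKey string `json:"secretKey"`
	} `json:"credential"`
	Region string `json:"region"`
	Logger struct {
		Console struct {
			Enable bool   `json:"enable"`
			Level  string `json:"level"`
		} `json:"console"`
		File struct {
			Enable   bool   `json:"enable"`
			FileName string `json:"fileName"`
			Level    string `json:"level"`
		} `json:"file"`
		Syslog struct {
			Enable  bool   `json:"enable"`
			Address string `json:"address"`
			Level   string `json:"level"`
		} `json:"syslog"`
	} `json:"logger"`
}
```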
Harshavardhana
6037fe66e9 minio: Simplify for gosimple tool complaints. 2016-04-02 17:28:54 -07:00
Harshavardhana
ba3a5805c1 vendorize: Add updated ui-assets.go. 2016-04-02 17:27:36 -07:00
Anand Babu (AB) Periasamy
33830bfcae Merge pull request #1273 from harshavardhana/fs-linux
fs: Break fs package to top-level and introduce ObjectAPI interface.
2016-04-01 17:03:23 -07:00
Harshavardhana
efc80343e3 fs: Break fs package to top-level and introduce ObjectAPI interface.
ObjectAPI interface brings in changes needed for XL ObjectAPI layer.

The new interface for any ObjectAPI layer is as below

```
// ObjectAPI interface.
type ObjectAPI interface {
        // Bucket resource API.
        DeleteBucket(bucket string) *probe.Error
        ListBuckets() ([]BucketInfo, *probe.Error)
        MakeBucket(bucket string) *probe.Error
        GetBucketInfo(bucket string) (BucketInfo, *probe.Error)

        // Bucket query API.
        ListObjects(bucket, prefix, marker, delimiter string, maxKeys int) (ListObjectsResult, *probe.Error)
        ListMultipartUploads(bucket string, resources BucketMultipartResourcesMetadata) (BucketMultipartResourcesMetadata, *probe.Error)

        // Object resource API.
        GetObject(bucket, object string, startOffset int64) (io.ReadCloser, *probe.Error)
        GetObjectInfo(bucket, object string) (ObjectInfo, *probe.Error)
        PutObject(bucket string, object string, size int64, data io.Reader, metadata map[string]string) (ObjectInfo, *probe.Error)
        DeleteObject(bucket, object string) *probe.Error

        // Object query API.
        NewMultipartUpload(bucket, object string) (string, *probe.Error)
        PutObjectPart(bucket, object, uploadID string, partID int, size int64, data io.Reader, md5Hex string) (string, *probe.Error)
        ListObjectParts(bucket, object string, resources ObjectResourcesMetadata) (ObjectResourcesMetadata, *probe.Error)
        CompleteMultipartUpload(bucket string, object string, uploadID string, parts []CompletePart) (ObjectInfo, *probe.Error)
        AbortMultipartUpload(bucket, object, uploadID string) *probe.Error
}
```
2016-04-01 15:58:39 -07:00
Harshavardhana
272c5165aa Merge pull request #1272 from krishnasrinivas/get-auth
GetAuth implementation. min/max check for accessKey and secretKey.
2016-04-01 09:38:58 -07:00
Krishna Srinivas
e318925f62 credentials: min/max length check for credentials. 2016-04-01 21:52:39 +05:30
Harshavardhana
2395c42fb5 Merge pull request #1277 from krishnasrinivas/remove-minio-go2
UI-handler: remove minio-go dependency.
2016-04-01 08:56:16 -07:00
Harshavardhana
9333dc3294 Merge pull request #1204 from hackintoshrao/test-bucket
Test: Changes to TestPutBucket to catch the race
2016-04-01 08:53:34 -07:00
Karthic Rao
30fc970eab Changes to TestPutBucket to catch the race 2016-04-01 15:21:16 +05:30
Krishna Srinivas
331890c4c8 UI-handler: remove minio-go dependency. 2016-04-01 13:56:32 +05:30
Anand Babu (AB) Periasamy
ae5c65d3c6 Merge pull request #1275 from harshavardhana/signature
error: Signature errors should be returned with APIErrorCode.
2016-03-31 23:37:38 -07:00
Harshavardhana
02ad48466d error: Signature errors should be returned with APIErrorCode.
The reasoning is that we can reply back with a wide range of
S3 error responses, which provide richer context
to the S3 client.

Fixes #1267
2016-03-31 23:28:40 -07:00
Harshavardhana
a84c466a40 Merge pull request #1251 from harshavardhana/release-fixes
release: gz doesn't preserve permissions, use tar.gz
2016-03-30 14:34:19 -07:00
Harshavardhana
956142be37 Merge pull request #1271 from krishnasrinivas/set-auth2
UI: implement SetAuth/GenerateAuth handlers for changing credentials.
2016-03-29 09:38:18 -07:00
Krishna Srinivas
5201905ad0 UI: implement SetAuth/GenerateAuth handlers for changing credentials. 2016-03-29 21:08:36 +05:30
Anand Babu (AB) Periasamy
186998ad99 Merge pull request #1266 from harshavardhana/cleanup
routers: Move API and Web routers into their own files.
2016-03-27 14:25:02 -07:00
Harshavardhana
aa8c9bad54 routers: Move API and Web routers into their own files.
This is done to ensure we have a clean way to add new routers such as

  - diskRouter
  - configRouter
  - lockRouter
2016-03-27 13:28:36 -07:00
Harshavardhana
59ee5a547c release: gz doesn't preserve permissions, use tar.gz
And fix various other issues with the release script.
2016-03-26 23:44:32 -07:00
Harshavardhana
1502e2f29f Merge pull request #1265 from vadmeste/add_fbsd_support
Add simple FreeBSD support, make the minio project compilable
2016-03-26 22:28:07 -07:00
Anand Babu (AB) Periasamy
90a46faf31 Merge pull request #1228 from harshavardhana/signature-cleanup
signature: Move signature outside, use a layered approach for signature verification
2016-03-26 15:46:52 -07:00
Harshavardhana
9dca46e156 signature: Use a layered approach for signature verification.
Signature calculation has now moved out from being a package to
top-level as a layered mechanism.

In case of payload calculation with body, go-routines are initiated
to simultaneously write and calculate shasum. Errors are sent
over the writer so that the lower layer removes the temporary files
properly.
2016-03-26 15:21:05 -07:00
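For illustration, a minimal sketch of the idea of writing the payload while computing its sha256; the real patch uses goroutines and sends errors over the writer, and all names below are hypothetical, not Minio's actual identifiers.
```
package sigsketch

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
)

// writeAndSum streams src to dst while computing the hex-encoded sha256
// of everything written, so the signature layer can verify the payload
// hash after the body has been persisted.
func writeAndSum(dst io.Writer, src io.Reader) (string, error) {
	h := sha256.New()
	// Every byte goes to both the destination and the hash.
	if _, err := io.Copy(io.MultiWriter(dst, h), src); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```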
Anis Elleuch
663f24064b Add simple FreeBSD support, make the minio project compilable 2016-03-26 22:39:34 +01:00
Harshavardhana
1b0bc814c4 docker: Fix docker Makefile. 2016-03-24 22:53:13 -07:00
Anand Babu (AB) Periasamy
cd5992c6db Merge pull request #1262 from harshavardhana/docker-file
docker: Fix docker command entry.
2016-03-24 22:44:09 -07:00
Harshavardhana
1ef5ab3c28 docker: Fix docker command entry. 2016-03-24 20:38:36 -07:00
Anand Babu (AB) Periasamy
5bd47861d6 Merge pull request #1261 from harshavardhana/update-message
minio: Server upon start displays a message if update is available.
2016-03-24 20:25:36 -07:00
Harshavardhana
3538c9f598 minio: Server upon start displays a message if update is available.
This code also turns itself off when the network is not
available or the request fails. It prints only when an update
is available.
2016-03-24 20:03:51 -07:00
Anand Babu (AB) Periasamy
24ae5467c8 Merge pull request #1260 from harshavardhana/minio
server: Print a message if no backends are configured.
2016-03-24 17:23:03 -07:00
Harshavardhana
36267eb6e2 server: Print a message if no backends are configured. 2016-03-24 10:47:54 -07:00
Harshavardhana
8255590b3c config/main: set the missing value. 2016-03-24 10:41:42 -07:00
Anand Babu (AB) Periasamy
4f6cf5a6b2 Merge pull request #1123 from harshavardhana/rewrite-v1
config/main: Re-write config files - add to new config v3
2016-03-24 08:52:35 -07:00
Harshavardhana
aaf97ea02c config/main: Re-write config files - add to new config v3
- New config format.

```
{
	"version": "3",
	"address": ":9000",
    "backend": {
          "type": "fs",
          "disk": "/path"
    },
	"credential": {
		"accessKey": "WLGDGYAQYIGI833EV05A",
		"secretKey": "BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF"
	},
	"region": "us-east-1",
	"logger": {
		"file": {
			"enable": false,
			"fileName": "",
			"level": "error"
		},
		"syslog": {
			"enable": false,
			"address": "",
			"level": "debug"
		},
		"console": {
			"enable": true,
			"level": "fatal"
		}
	}
}
```

New command lines for supporting XL.

Minio initialize filesystem backend.
~~~
$ minio init fs <path>
~~~

Minio initialize XL backend.
~~~
$ minio init xl <url1>...<url16>
~~~

For 'fs' backend it starts the server.
~~~
$ minio server
~~~

For 'xl' backend it waits for servers to join.
~~~
$ minio server
... [PROGRESS BAR] of servers connecting
~~~

Now on other servers execute 'join' and they connect.
~~~
....
minio join <url1> -- from <url2> && minio server
minio join <url1> -- from <url3> && minio server
...
...
minio join <url1> -- from <url16> && minio server
~~~
2016-03-23 19:16:09 -07:00
Harshavardhana
85e50f2bb9 Merge pull request #1258 from hackintoshrao/list-object-prefix-response
api: GetObjectInfo fix
2016-03-23 11:09:22 -07:00
Karthic Rao
c8570edaab Fixed issue with 'mc ls' when the prefix is a directory; tests added for GetObjectInfo 2016-03-23 19:42:04 +05:30
Anand Babu (AB) Periasamy
f1161d830f Merge pull request #1257 from harshavardhana/content-md5
routers: Fix order of PostPolicyHandlers and headers.
2016-03-22 20:49:21 -07:00
Anand Babu (AB) Periasamy
4aa098ede9 Merge pull request #1256 from harshavardhana/resources
bucketpolicy: checkBucketPolicy should keep resources in map.
2016-03-22 20:47:48 -07:00
Harshavardhana
76bda0d8f1 routers: Fix order of PostPolicyHandlers and headers. 2016-03-22 17:54:44 -07:00
Harshavardhana
996d2e2a10 bucketpolicy: checkBucketPolicy should keep resources in map.
This is done to avoid appending duplicate resources
for each action.
2016-03-22 17:04:39 -07:00
Harshavardhana
2edf32adfa Merge pull request #1253 from koolhead17/patch-5
Update README.md
2016-03-22 17:01:59 -07:00
Harshavardhana
e3a3283883 Merge pull request #1255 from hackintoshrao/list-object-prefix-response
api: ListObject - Changing to empty response when prefixDir doesn't exist
2016-03-22 17:01:48 -07:00
Karthic Rao
7be79b507b Changing to empty response when prefixDir doesn't exist 2016-03-23 04:46:10 +05:30
Harshavardhana
600a932acb Merge pull request #1254 from hackintoshrao/formatting-fix
Formatting: Formatting issues fixed
2016-03-22 04:34:57 -07:00
Karthic Rao
ff41c050d5 Formatting issues fixed. 2016-03-22 15:55:29 +05:30
koolhead17
7f993bb5e6 Update README.md
fixed spelling mistakes,
2016-03-22 13:37:04 +05:30
Harshavardhana
7a97622fed Merge pull request #1252 from koolhead17/patch-4
docs: Add more s3cmd commands
2016-03-22 01:04:35 -07:00
koolhead17
da691dc100 Update README.md
Added more s3cmd commands associated with and known to work well with the Minio server
2016-03-22 12:39:44 +05:30
Harshavardhana
e2c515b334 Merge pull request #1245 from hackintoshrao/fs-bucket-tests
api: ListObject - tests, benchmark, optimization
2016-03-21 21:51:48 -07:00
Karthic Rao
b55922effe Fix for IsTruncated being set to true under certain conditions.
Optimizing List Objects by using binary sort to discard entries in cases
where prefix or marker is set.

Adding test coverage to ListObjects.

Adding benchmark to ListObjects.
2016-03-22 10:09:16 +05:30
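As a rough sketch of the optimization mentioned above (discarding entries up to the marker via binary search over the sorted listing); the names here are illustrative, not the actual ListObjects code.
```
package listsketch

import "sort"

// skipPastMarker assumes entries are already sorted lexically and uses
// binary search to discard everything up to and including the marker.
func skipPastMarker(entries []string, marker string) []string {
	i := sort.SearchStrings(entries, marker)
	if i < len(entries) && entries[i] == marker {
		i++ // skip the marker entry itself
	}
	return entries[i:]
}
```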
Anand Babu (AB) Periasamy
407316c631 Merge pull request #1249 from harshavardhana/remove-deadcode
main: Remove all the dead/unused code.
2016-03-21 07:55:07 -07:00
Harshavardhana
902aa05021 main: Remove all the dead/unused code.
This patch removes some dead and unused code.
2016-03-21 01:12:29 -07:00
Anand Babu (AB) Periasamy
d95aac4b36 Merge pull request #1246 from harshavardhana/list-response
api: ListMultipartUploads and ListParts responded with more entries than expected.
2016-03-20 00:07:06 -07:00
Harshavardhana
fc72a0362f api: ListMultipartUploads and ListParts responded with more entries than expected.
The issue is that empty entries were added since allocating an array was
followed by an append. Keep an index and copy only the right entries
precisely.

Fixes an issue reported at - https://github.com/minio/mc/issues/1642
2016-03-19 23:52:40 -07:00
Anand Babu (AB) Periasamy
1900d16d22 Merge pull request #1244 from harshavardhana/zip-compress
buildscripts: compress release binaries.
2016-03-18 23:37:52 -07:00
Harshavardhana
41cba3a457 buildscripts: compress release binaries.
Fix update command as well to show compressed files in updates.
2016-03-18 23:30:54 -07:00
Harshavardhana
a90faf5996 Merge pull request #1216 from sreeram-boyapati/choose_arch
buildscripts: Enable user to choose an arch to build
2016-03-18 12:13:03 -07:00
Harshavardhana
e7e2fae156 Merge pull request #1238 from hackintoshrao/maxkeys-zero-fix
[GH-1237] api: Handling maxKeys=0 case with an empty response
2016-03-18 02:57:40 -07:00
Karthic Rao
99af0444b7 Handling maxKeys=0 case with an empty response 2016-03-18 15:16:30 +05:30
Harshavardhana
2357e00317 Fix s3cmd config 2016-03-18 02:07:17 -07:00
Anand Babu (AB) Periasamy
27c75d8be4 Merge pull request #1241 from harshavardhana/print
[GH-1240] main: Print keys after init and full server initialization.
2016-03-17 23:42:46 -07:00
Harshavardhana
72d364cbf2 [GH-1240] main: Print keys after init and full server initialization.
Setting MINIO_ACCESS_KEY and MINIO_SECRET_KEY re-writes the values
in config properly, but the init message is not updated.

Fix it by delaying the printing of keys until everything is properly
initialized.
2016-03-17 20:01:52 -07:00
Harshavardhana
a729d02c31 Merge pull request #1227 from krishnasrinivas/docker-md
docker: simplify docker run instructions.
2016-03-16 18:38:37 -07:00
Harshavardhana
c80c06680d Merge pull request #1235 from krishnasrinivas/handle-eaddrinuse
startup: do not fail for non-EADDRINUSE errors. Fixes #1234
2016-03-16 18:38:27 -07:00
Harshavardhana
facd5a9ffd Merge pull request #1239 from awwalker/verify-headers
api: CopyObject - verify the additional headers before starting to write.
2016-03-16 18:37:56 -07:00
awwalker
34f2c5bcdf verify headers before writing
2016-03-16 18:03:23 -07:00
Krishna Srinivas
fd943c704d docker: simplify docker run instructions. 2016-03-17 01:39:49 +05:30
Krishna Srinivas
e5b411caf4 startup: do not fail for non-EADDRINUSE errors. Fixes #1234 2016-03-17 01:36:08 +05:30
Anand Babu (AB) Periasamy
c784707ed8 Merge pull request #1233 from harshavardhana/bucket-policy
bucketpolicy: Improve bucket policy validation, avoid nested rules.
2016-03-15 18:57:02 -07:00
Harshavardhana
88714e7c8e bucketpolicy: Improve bucket policy validation, avoid nested rules.
Bucket policy validation is stricter now, to avoid nested
rules. The reason to do this is to keep the rules simpler and more
meaningful, avoiding conflicts.

This patch implements stricter checks.

Example policy to be generally avoided.
```
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Action": [
				"s3:GetObject",
				"s3:DeleteObject"
			],
			"Effect": "Allow",
			"Principal": {
				"AWS": [
					"*"
				]
			},
			"Resource": [
				"arn:aws:s3:::jarjarbing/*"
			]
		},
		{
			"Action": [
				"s3:GetObject",
				"s3:DeleteObject"
			],
			"Effect": "Deny",
			"Principal": {
				"AWS": [
					"*"
				]
			},
			"Resource": [
				"arn:aws:s3:::jarjarbing/restic/key/*"
			]
		}
	]
}
```
2016-03-15 17:50:23 -07:00
Harshavardhana
2e3e164f16 Update Docker.md 2016-03-15 10:01:58 -07:00
Harshavardhana
15b5f4d7a3 Update Caddy.md 2016-03-14 20:51:16 -07:00
Harshavardhana
83f9a50137 Merge pull request #1229 from brendanashworth/improve-fs-getobject
pkg/fs: optimize GetObject, add benchmark
2016-03-13 14:03:45 -07:00
Brendan Ashworth
583e4ecff6 pkg/fs: optimize GetObject syscalls for common case
In the common case, GetObject is called on a bucket that exists and an
object that exists and is not a directory. It should be optimized for
this case, thus error-related syscalls are pushed back until they are
necessary.

This should not impact performance negatively in the uncommon case, and
instead drops two otherwise unnecessary os.Stat's in the common case.

The race conditions around a proper error being returned were present
beforehand.

It also renames 'err' to 'e'.
2016-03-13 13:56:33 -07:00
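A hedged sketch of the ordering described above: open the object first and only fall back to stat-based classification on failure, so the common case performs no extra syscalls. The helper below is illustrative, not the pkg/fs code.
```
package getobjsketch

import "os"

// openObject attempts the open directly; error-related syscalls happen
// only when the open itself fails.
func openObject(path string) (*os.File, error) {
	f, e := os.Open(path)
	if e == nil {
		return f, nil // common case: no extra os.Stat needed
	}
	// Error path only: classify why the open failed.
	if _, statErr := os.Stat(path); os.IsNotExist(statErr) {
		return nil, statErr // object does not exist
	}
	return nil, e
}
```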
Brendan Ashworth
b2257682e4 pkg/fs: add benchmark for GetObject
This commit adds a benchmark for GetObject. It doesn't leverage the I/O
as much because it uses short text for data, just 58 chars.
2016-03-13 11:13:06 -07:00
Harshavardhana
09789de586 Merge branch 'awwalker-new-copy-headers' 2016-03-12 10:56:11 -08:00
awwalker
9a5e3299fc api/object: Add CopyObject to support match/modified copy headers
Adds support for the following request headers:

- x-amz-copy-source-if-match
- x-amz-copy-source-if-none-match
- x-amz-copy-source-if-unmodified-since
- x-amz-copy-source-if-modified-since

Fixes #1176
2016-03-12 10:54:23 -08:00
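For reference, a minimal sketch of evaluating these copy-source conditional headers against the source object's ETag and modification time; copyConditionsMet is a hypothetical helper, not the actual handler.
```
package copysketch

import (
	"net/http"
	"time"
)

// copyConditionsMet returns false if any x-amz-copy-source-if-* header
// present in the request rules out the copy.
func copyConditionsMet(h http.Header, etag string, modTime time.Time) bool {
	if v := h.Get("x-amz-copy-source-if-match"); v != "" && v != etag {
		return false
	}
	if v := h.Get("x-amz-copy-source-if-none-match"); v != "" && v == etag {
		return false
	}
	if v := h.Get("x-amz-copy-source-if-unmodified-since"); v != "" {
		if t, err := http.ParseTime(v); err == nil && modTime.After(t) {
			return false
		}
	}
	if v := h.Get("x-amz-copy-source-if-modified-since"); v != "" {
		if t, err := http.ParseTime(v); err == nil && !modTime.After(t) {
			return false
		}
	}
	return true
}
```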
Harshavardhana
2c09df24ee Merge pull request #1225 from hackintoshrao/fs-bucket-tests
tests: Add tests for ListBuckets
2016-03-12 01:26:56 -08:00
Karthic Rao
53a76439a2 test for GetBucketInfo 2016-03-12 14:31:30 +05:30
Anand Babu (AB) Periasamy
6b0af08885 Merge pull request #1224 from harshavardhana/simplify
cleanup: Remove unnecessary packages and tests. Simplify.
2016-03-11 22:58:43 -08:00
Harshavardhana
5282a79eda cleanup: Remove unnecessary packages and tests. Simplify. 2016-03-11 19:53:55 -08:00
Anand Babu (AB) Periasamy
3d8b9afa8f Merge pull request #1222 from harshavardhana/cleanup-fix
cleanup: Rename ObjectMetadata as ObjectInfo.
2016-03-11 17:14:22 -08:00
Harshavardhana
52751d81cb cleanup: Rename ObjectMetadata as ObjectInfo.
Fixes #1215
2016-03-11 16:58:08 -08:00
Harshavardhana
c81f4b0228 Merge pull request #1218 from hackintoshrao/better-fs-util-test
Test: Better structuring of fs-utils test
2016-03-11 06:54:43 -08:00
Karthic Rao
ec8c1d4ef6 Better structuring of fs-utils test 2016-03-11 19:19:47 +05:30
Sreeram Boyapati
62bd44f873 buildscripts: Enable user to choose an arch to build
- Building minio for all architectures takes a lot of time.
    Choose the one the user needs.
2016-03-11 10:17:56 +05:30
Anand Babu (AB) Periasamy
b5c77b641d Merge pull request #1209 from harshavardhana/err-naming
error: Add proper prefixes for s3Error codes.
2016-03-10 18:59:16 -08:00
Harshavardhana
fdf3d64793 error: Add proper prefixes for s3Error codes.
This patch adds 'Err' prefix for all standard API
error codes and also adds a proper type for them.
2016-03-10 18:38:46 -08:00
Anand Babu (AB) Periasamy
373d335d94 Merge pull request #1214 from brendanashworth/improve-listbuckets
ListBuckets test & improvement, IsValid{Bucket,Object}Name fix, test, docs
2016-03-10 18:20:18 -08:00
Anand Babu (AB) Periasamy
b16025abf4 Merge pull request #1213 from harshavardhana/presigned
auth: Detect anonymous as the last resort.
2016-03-10 18:11:43 -08:00
Harshavardhana
166ef09c3d auth: Detect anonymous as the last resort. 2016-03-10 17:55:36 -08:00
Harshavardhana
e54aa10201 Merge pull request #1212 from harshavardhana/venodr
vendor: Update ui-assets with new changes and release.
2016-03-10 17:55:25 -08:00
Harshavardhana
5606232567 vendor: Update ui-assets with new changes and release. 2016-03-10 17:36:32 -08:00
Harshavardhana
9352cb87c6 Merge pull request #1211 from harshavardhana/vendorize
vendor: Add minio-go vendor updates.
2016-03-10 15:50:14 -08:00
Harshavardhana
e781959d5b vendor: Add minio-go vendor updates. 2016-03-10 14:33:15 -08:00
Harshavardhana
af295f3600 Merge pull request #1186 from balamurugana/devel
api: refactor list object handling in fs backend
2016-03-10 13:43:49 -08:00
Bala.FA
c70bc2209e api: refactor list object handling in fs backend
When list object is invoked, it creates a goroutine for the given
parameters if one is not already available, else it reuses the existing
goroutine.  These goroutines stay alive for 15 seconds to serve further
continuation list object requests, else they exit.

Fixes #1076
2016-03-11 02:20:51 +05:30
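Roughly, the mechanism can be pictured as below: a worker goroutine keyed by the listing parameters waits up to 15 seconds for a continuation request before exiting. This is only a sketch with made-up names, not the fs backend code.
```
package listpoolsketch

import (
	"sync"
	"time"
)

type listWorker struct {
	req  chan string   // continuation markers from follow-up requests
	resp chan []string // batches of listed entries
}

var (
	mu      sync.Mutex
	workers = map[string]*listWorker{} // keyed by bucket+prefix+delimiter
)

// getWorker returns an existing worker for the key or starts a new one
// that exits after 15 seconds without a continuation request.
func getWorker(key string) *listWorker {
	mu.Lock()
	defer mu.Unlock()
	if w, ok := workers[key]; ok {
		return w
	}
	w := &listWorker{req: make(chan string), resp: make(chan []string)}
	workers[key] = w
	go func() {
		for {
			select {
			case marker := <-w.req:
				_ = marker // walk from marker and send the next batch
				w.resp <- nil
			case <-time.After(15 * time.Second):
				// No continuation arrived in time; free the slot and exit.
				mu.Lock()
				delete(workers, key)
				mu.Unlock()
				return
			}
		}
	}()
	return w
}
```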
Harshavardhana
5cb546d288 Merge pull request #1210 from krishnasrinivas/port-check-fix
startup: specify the network - tcp4/tcp6 for ListenTCP()
2016-03-10 08:54:02 -08:00
Krishna Srinivas
010e775b17 startup: specify the network - tcp4/tcp6 for ListenTCP() 2016-03-10 17:17:47 +05:30
Harshavardhana
2cba605514 Merge pull request #1206 from fwessels/notice-typo
Fix typo
2016-03-09 09:25:24 -08:00
Harshavardhana
d740d3aaae Merge pull request #1208 from krishnasrinivas/port-check
startup: do not start minio server if port is not free. Fixes #1207
2016-03-09 09:25:05 -08:00
Anand Babu (AB) Periasamy
5ac4afa4d1 Merge pull request #1080 from harshavardhana/bucket-policy
accessPolicy: Implement Put, Get, Delete access policy.
2016-03-09 07:11:39 -08:00
Krishna Srinivas
ea7ea427ca startup: do not start minio server if port is not free. Fixes #1207 2016-03-09 19:39:40 +05:30
frankw
027f7efbdb Fix typo 2016-03-09 13:55:58 +01:00
Harshavardhana
d5057b3c51 accessPolicy: Implement Put, Get, Delete access policy.
This patch implements Get,Put,Delete bucket policies

Supporting - http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html

Currently supports following actions.

   "*":                             true,
   "s3:*":                          true,
   "s3:GetObject":                  true,
   "s3:ListBucket":                 true,
   "s3:PutObject":                  true,
   "s3:CreateBucket":               true,
   "s3:GetBucketLocation":          true,
   "s3:DeleteBucket":               true,
   "s3:DeleteObject":               true,
   "s3:AbortMultipartUpload":       true,
   "s3:ListBucketMultipartUploads": true,
   "s3:ListMultipartUploadParts":   true,

following conditions for "StringEquals" and "StringNotEquals"

   "s3:prefix", "s3:max-keys"
2016-03-08 17:44:50 -08:00
Harshavardhana
846410c563 Merge pull request #1205 from harshavardhana/patch-3
Fix docker documentation.
2016-03-08 11:45:45 -08:00
Harshavardhana
f268762ec7 Fix docker documentation. 2016-03-08 11:23:24 -08:00
Atul Jha
41739e0913 Fix doc as per community member suggestion.
Added suggestion mentioned by our community member - fixes #1116
2016-03-08 10:42:56 -08:00
Brendan Ashworth
cd3eb63c4a pkg/fs: test, document, and fix IsValid{Bucket,Object}Name
This commit improves the docs for both functions (more Go-like) and
drops an unnecessary condition in IsValidBucketName. This also drops a
condition in IsValidObjectName where "" (empty string) was a valid
object name. This has been fixed and will no longer return true.

This commit also adds tests for both functions, including a regression
test for the bug fix.
2016-03-07 19:59:24 -08:00
Brendan Ashworth
a5d0bef4e2 pkg/fs: test, bench, and drop unnecessary check in ListBuckets
There is now a simple test and a benchmark for ListBuckets. I also
dropped an unnecessary check that was simply repeated from above,
guaranteed to be true.
2016-03-07 19:58:33 -08:00
Harshavardhana
114f9de5eb Merge pull request #1201 from harshavardhana/time
handlers: Cleanup time handler helpers.
2016-03-07 12:05:26 -08:00
Harshavardhana
761cb2c740 handlers: Cleanup time handlers helpers. 2016-03-07 10:47:45 -08:00
Harshavardhana
59fbb6d081 Merge pull request #1199 from brendanashworth/improvements-fs
pkg/fs improvements
2016-03-07 00:31:57 -08:00
Brendan Ashworth
fab45aae40 pkg/fs: add bucket test and benchmarks
Lots of useful benchmarks and a simple test addition!
2016-03-07 00:07:11 -08:00
Brendan Ashworth
7399d8ceaa pkg/fs: skip unnecessary os.Stat system call 2016-03-07 00:07:11 -08:00
Brendan Ashworth
0a0451a0fb pkg/fs: DRY SetBucketMetadata
It had a lot of code that was the same as GetBucketMetadata, so instead
call GBM from SBM so as to reduce doing the same thing in two different
spots. Theoretically this will induce a small overhead as now at least
two calls of denormalizeBucket are made, although this shouldn't be
noticeable.
2016-03-07 00:07:11 -08:00
Brendan Ashworth
294ea814bf pkg/fs: for locks, prefer defer and read-only ops
This commit prefers the use of 'defer' for fs.Unlock (and fs.RUnlock)
because it is more idiomatic Go and reduces repetition in the code,
lending to a cleaner code base.

It also switches a few uses of the lock to read-only locks, which should
improve performance of those functions dramatically in certain contexts.
2016-03-07 00:07:11 -08:00
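The pattern reads roughly as follows; the Filesystem type and methods are illustrative placeholders, not the pkg/fs definitions.
```
package locksketch

import "sync"

type Filesystem struct {
	mu      sync.RWMutex
	buckets map[string]struct{}
}

// HasBucket only reads shared state, so a read lock lets concurrent
// readers proceed, and defer guarantees the unlock on every return path.
func (fs *Filesystem) HasBucket(name string) bool {
	fs.mu.RLock()
	defer fs.mu.RUnlock()
	_, ok := fs.buckets[name]
	return ok
}

// MakeBucket mutates shared state and therefore takes the write lock.
func (fs *Filesystem) MakeBucket(name string) {
	fs.mu.Lock()
	defer fs.mu.Unlock()
	if fs.buckets == nil {
		fs.buckets = map[string]struct{}{}
	}
	fs.buckets[name] = struct{}{}
}
```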
Anand Babu (AB) Periasamy
d54a8a9c07 Merge pull request #1195 from harshavardhana/delete-objects
api: Implement multiple objects Delete api - fixes #956
2016-03-06 18:52:39 -08:00
Harshavardhana
aed62788d9 api: Implement multiple objects Delete api - fixes #956
This API takes input XML input in following form.

```
<?xml version="1.0" encoding="UTF-8"?>
<Delete>
    <Quiet>true</Quiet>
    <Object>
         <Key>Key</Key>
    </Object>
    <Object>
         <Key>Key</Key>
    </Object>
    ...
</Delete>
```

and responds with the list of successful deletes and the list of errors
for the deleted objects.

```
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Deleted>
    <Key>sample1.txt</Key>
  </Deleted>
  <Error>
    <Key>sample2.txt</Key>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
  </Error>
</DeleteResult>
```
2016-03-06 18:31:50 -08:00
Harshavardhana
6f842124ad Merge pull request #1197 from brendanashworth/improve-DRY-1
api: DRY code and add new test
2016-03-06 16:30:48 -08:00
Brendan Ashworth
adf74ffdb0 api: DRY code and add new test
This commit makes code cleaner and reduces the repetitions in the code
base. Specifically, it reduces the clutter in setObjectHeaders. It also
merges encodeSuccessResponse and encodeErrorResponse together because
they served no purpose differently. Finally, it adds a simple test for
generateRequestID.
2016-03-06 13:26:27 -08:00
Harshavardhana
e883583804 Merge pull request #1192 from harshavardhana/color
console: Fix console color printing on windows.
2016-03-04 13:50:49 -08:00
Harshavardhana
164dfe2ec9 console: Fix console color printing on windows.
Colored accessKeyID and secretAccessKey output is unreadable on Windows
command prompt and PowerShell.

Use the console package from minio client.
2016-03-04 10:07:19 -08:00
Harshavardhana
91800cff53 Merge pull request #1189 from krishnasrinivas/assets-gzip-encoding
browser-assets: serve asset files with compression.
2016-03-04 01:12:11 -08:00
Krishna Srinivas
44b2037667 browser-assets: serve asset files with compression. 2016-03-04 07:47:26 +05:30
Harshavardhana
8bf882e5c0 Merge pull request #1188 from harshavardhana/vendorize
browser: vendorize to new browser update
2016-03-03 18:04:09 -08:00
Harshavardhana
95d3ecb9ce browser: vendorize to new browser update 2016-03-03 17:46:09 -08:00
Harshavardhana
97472e76b0 Merge pull request #1187 from harshavardhana/support-newfiles
routers: Support special asset files.
2016-03-03 17:25:25 -08:00
Harshavardhana
df0bce374c routers: Support special asset files.
 - loader.css
 - logo.svg
 - {firefox,chrome,safari}.png
2016-03-03 17:13:15 -08:00
Harshavardhana
356b889e66 Merge pull request #1184 from harshavardhana/multipart
multipart: remove proper MD5, rather create MD5 based on parts to be s3 compatible.
2016-03-02 18:42:24 -08:00
Harshavardhana
f111997184 multipart: remove proper MD5, rather create MD5 based on parts to be s3 compatible.
This increases the performance phenomenally.
2016-03-02 14:20:49 -08:00
Harshavardhana
5f2cfdfbe2 Merge pull request #1185 from harshavardhana/fix-signature
signature: Fix signature handling of parallel requests.
2016-03-02 14:20:18 -08:00
Harshavardhana
17d145df3a signature: Fix signature handling of parallel requests.
Signature struct should be immutable, this fixes an issue
with AWS cli not being able to do multipart put operations.
2016-03-02 11:49:50 -08:00
GarimaKapoor
b37fbabe7f Update README.md 2016-03-02 11:07:23 -08:00
Harshavardhana
d7767d6431 Merge pull request #1181 from hackintoshrao/vet-shadow-fix
build: Removing error initialization to solve variable shadowing
2016-03-01 21:26:15 -08:00
Karthic Rao
6651f5b368 go vet shadow error patch 2016-03-02 09:55:00 +05:30
Harshavardhana
dc622674e1 Merge pull request #1178 from harshavardhana/list-objects
list: Fix handling of maxKeys and prefixes.
2016-03-01 17:55:19 -08:00
Harshavardhana
c7021f6a95 list: Fix handling of maxKeys and prefixes.
This fixes a problem of requeuing the same request
and also fixes a major problem of sending truncated results
for full key prefixes.

Fixes #1177
2016-03-01 17:34:31 -08:00
Harshavardhana
53ca192fe7 Merge pull request #1175 from harshavardhana/get-object
api: Implement support for additional request headers.
2016-02-28 19:42:40 -08:00
Harshavardhana
ee1b86e517 api: Implement support for additional request headers.
Now GetObject and HeadObject both support

  - If-Modified-Since, If-Unmodified-Since
  - If-Match, If-None-Match

request headers.

These headers are used to further handle the responses for GetObject
and HeadObject API.

Fixes #1098
2016-02-28 19:34:20 -08:00
Harshavardhana
0b2e449727 Merge pull request #1174 from harshavardhana/copy-object
api: Implement CopyObject s3 API, doing server side copy.
2016-02-27 20:44:56 -08:00
Harshavardhana
3ff8a1b719 api: Implement CopyObject s3 API, doing server side copy.
Fixes #1172
2016-02-27 19:51:59 -08:00
Harshavardhana
2520298734 Merge pull request #1170 from krishnasrinivas/caching
caching: disable caching of index.html and enable caching for other UI asset files.
2016-02-27 00:02:51 -08:00
Krishna Srinivas
af7170675d caching: disable caching of index.html and enable caching for other UI asset files. 2016-02-27 12:22:26 +05:30
Harshavardhana
7780a1ce4c Merge pull request #1171 from harshavardhana/disable-multi
api: Return NotImplemented for MultiDelete and CopyObject APIs
2016-02-26 16:34:37 -08:00
Harshavardhana
ae6e774377 api: Return NotImplemented for MultiDelete and CopyObject APIs 2016-02-26 15:57:30 -08:00
Harshavardhana
2ec211e52a Merge pull request #1169 from minio/harshavardhana-patch-1
Update README.md
2016-02-24 20:30:29 -08:00
Harshavardhana
9122f06307 Update README.md 2016-02-24 19:57:11 -08:00
Harshavardhana
04d6408c31 Merge pull request #1167 from harshavardhana/fix-release-tag
build: Fix release tag.
2016-02-23 17:16:26 -08:00
Harshavardhana
024c00addd build: Fix release tag. 2016-02-23 16:56:41 -08:00
Harshavardhana
2c26b98242 Merge pull request #1166 from harshavardhana/release
build: Add release builds, now generated with 'make release'
2016-02-23 16:02:51 -08:00
Harshavardhana
223245cc45 build: Add release builds, now generated with 'make release'
Currently supported platforms are

    - linux{amd64,arm,386}
    - windows{amd64,386}
    - darwin{amd64}
2016-02-23 15:14:02 -08:00
Harshavardhana
781540081d vendor: Update to new upstream changes from fatih/color
Brings in changes like support for Solaris/FreeBSD.
2016-02-23 15:01:17 -08:00
Harshavardhana
acb44e294c Merge pull request #1140 from harshavardhana/go1.6
build/vet: Fix all the shadowing reports with go1.6
2016-02-23 14:50:46 -08:00
Harshavardhana
408aa72146 build/vet: Fix all the shadowing reports with go1.6
Golang 1.6 is the default version for the build now.

Additionally set 'GODEBUG=cgocheck=0' for now, until
we fix the erasure coding package.

Read more here https://tip.golang.org/doc/go1.6#cgo
2016-02-23 14:34:39 -08:00
Harshavardhana
04424ae9c2 Merge pull request #1163 from harshavardhana/minio-browser
web: Removing dependency for gpg and downloading assets.
2016-02-23 14:16:25 -08:00
Harshavardhana
2181003609 web: Removing dependency for gpg and downloading assets.
Assets are vendorized from now on and updated for each release.
2016-02-23 13:32:12 -08:00
Harshavardhana
997141d031 Merge pull request #1161 from krishnasrinivas/feb-23
UI: serve index.html if the requested file is not found in the assets bundle.
2016-02-23 12:14:52 -08:00
Krishna Srinivas
e509bcb2b9 UI: serve index.html if the requested file is not found in the assets bundle. 2016-02-24 00:36:30 +05:30
Anand Babu (AB) Periasamy
07da31f8b8 Merge pull request #1150 from harshavardhana/signature
signV4: Move pkg/signature to pkg/s3/signature4
2016-02-23 12:39:28 +05:30
Harshavardhana
653ceee9ee signV4: Move pkg/signature to pkg/s3/signature4
Cleanup and move this to relevant path.
2016-02-22 22:47:09 -08:00
Anand Babu (AB) Periasamy
b31dac9162 Merge pull request #1144 from harshavardhana/definitions
cleanup: Remove definitions and move them to its relative places acco…
2016-02-23 01:50:17 +05:30
Harshavardhana
31c941d320 Merge branch 'osallou-feature_credentials_envvars' 2016-02-22 10:49:14 -08:00
Olivier Sallou
678585c513 use environment variables to set and override access and secret keys at server startup 2016-02-22 10:49:02 -08:00
Harshavardhana
800b19d8e5 cleanup: Remove definitions and move them to their relative places accordingly
- Move fs-definitions.go and break them into fs-datatypes.go, fs-bucket-acl.go
  and fs-utils.go
- Move api-definitions.go to api-response.go, where they should be.
- Move web-definitions to its related handlers.
2016-02-22 10:41:27 -08:00
Harshavardhana
bd6850e79f Merge pull request #1155 from harshavardhana/verify-yasm
build: Verify yasm version and complain - fixes #1154
2016-02-22 03:26:03 -08:00
Harshavardhana
18fd0a0f81 build: Verify yasm version and complain - fixes #1154 2016-02-22 03:17:17 -08:00
Harshavardhana
8824b18c29 Merge pull request #1151 from harshavardhana/bucket-fix
router: Fix "/minio" router for web.
2016-02-21 23:05:43 -08:00
Harshavardhana
5da1972d1f router: Fix "/minio" router for web. 2016-02-21 22:59:01 -08:00
Harshavardhana
d3966d1dde Merge pull request #1145 from harshavardhana/bucket
web: Handle private bucket match from prefix to exact match.
2016-02-21 12:14:09 -08:00
Harshavardhana
393636f74b Merge pull request #1149 from alexex/master
Alpine & CA-Certs for SSL support
2016-02-20 02:24:03 -08:00
Harshavardhana
91a7b13529 web: Handle private bucket match from prefix to exact match.
Filter out 'privateBucket' if any from listBuckets output.
2016-02-20 02:22:56 -08:00
Harshavardhana
185d29a899 Merge pull request #1148 from harshavardhana/fix-bug
web: Add targetProto for putObjectURL,getObjectURL SSL requests.
2016-02-20 02:15:56 -08:00
Alexander Jung-Loddenkemper
342f8fbe5d Alpine & CA-Certs for SSL support
Switch to alpine as a base image and install the CA certificates to enable the minio docker container to run behind an SSL proxy, which will fix #1146
2016-02-20 11:15:19 +01:00
Harshavardhana
4db136c19c web: Add targetProto for putObjectURL,getObjectURL SSL requests.
Currently the default request was based on the 'minio server'
SSL configuration. But behind a proxy this is invalid; the browser
needs to send which protocol it wishes the PresignedURL
to be generated for.
2016-02-20 01:59:18 -08:00
Harshavardhana
a18620fa86 Merge pull request #1141 from harshavardhana/target-host
web: PresignedGet/Put targetHost should always have a valid value.
2016-02-19 15:53:20 -08:00
Harshavardhana
d4c91426a7 web: PresignedGet/Put targetHost should always have a valid value.
PresignedURLs should always be generated for the targetHost, so that
we can avoid signature issues.
2016-02-19 15:31:38 -08:00
Harshavardhana
51e611a380 Merge pull request #1139 from harshavardhana/docker
docker: Make sure that we properly check for containers.
2016-02-18 13:59:22 -08:00
Harshavardhana
354229732b docker: Make sure that we properly check for containers. 2016-02-18 13:39:44 -08:00
Harshavardhana
1c33926239 Merge pull request #1136 from harshavardhana/add-proxy
presigned: Fix a bug in presigned request verification.
2016-02-18 02:46:04 -08:00
Harshavardhana
91a092792a presigned: Fix a bug in presigned request verification.
Additionally add Docker proxy configuration.
2016-02-18 02:23:12 -08:00
Harshavardhana
d561f0cc0b Merge pull request #1135 from abperiasamy/vendor-update-go-homedir
vendor update for go-homedir
2016-02-18 00:14:58 -08:00
Anand Babu (AB) Periasamy
f53e9dd1b8 vendor update for go-homedir 2016-02-18 13:02:41 +05:30
Harshavardhana
2a6bc604db Merge pull request #1132 from harshavardhana/merge-ports
web/rpc: Merge ports with API server.
2016-02-17 21:22:48 -08:00
Harshavardhana
dd9aaa855c web/rpc: Merge ports with API server.
Fixes #1081 and #1130
2016-02-17 20:28:15 -08:00
Harshavardhana
5a9333a67b signature: Rewrite signature handling and move it into a library. 2016-02-16 17:28:16 -08:00
Harshavardhana
b531bb31bb Merge pull request #1129 from harshavardhana/cpu-klauspost
cpu: Remove pkg/cpu in favor of better klauspost/cpuid.
2016-02-15 14:06:05 -08:00
Harshavardhana
9e10ee7e47 cpu: Remove pkg/cpu in favor of better klauspost/cpuid.
Fixes #1128
2016-02-15 13:50:33 -08:00
Harshavardhana
a173313bc2 Merge pull request #1127 from abperiasamy/master
rewrite minio runtime checks
2016-02-15 10:39:01 -08:00
Anand Babu (AB) Periasamy
bbca70e13b rewrite minio runtime checks 2016-02-15 17:56:56 +05:30
Harshavardhana
e73301944a Merge pull request #1126 from harshavardhana/fix-multipart
fs/multipart: Handle un-ordered creation of multiparts.
2016-02-14 00:49:07 -08:00
Harshavardhana
fbab7128d5 fs/multipart: Handle un-ordered creation of multiparts.
Fixes #1125
2016-02-14 00:39:15 -08:00
Harshavardhana
d38bbf3562 Merge pull request #1124 from harshavardhana/json-rpc-fix
rpc: Fix json rpc to handle array and object request params.
2016-02-13 19:15:52 -08:00
Harshavardhana
e59ceba51b rpc: Fix json rpc to handle array and object request params.
rpc/v2/json2 code has a bug where it treats all jsonrpc 2.0
request params like an 'object'. In accordance with the spec
they could be either an 'object' or an 'array'.

Handle both cases.
2016-02-13 19:01:36 -08:00
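A small sketch of handling both forms; decodeParams is a hypothetical helper, not the vendored rpc/v2/json2 code.
```
package jsonrpcsketch

import "encoding/json"

// decodeParams accepts either a JSON object or a JSON array in the
// "params" field: it tries the object form first and falls back to a
// single-element array wrapping the same object.
func decodeParams(raw json.RawMessage, args interface{}) error {
	if err := json.Unmarshal(raw, args); err == nil {
		return nil
	}
	var arr []json.RawMessage
	if err := json.Unmarshal(raw, &arr); err != nil {
		return err
	}
	if len(arr) == 0 {
		return nil
	}
	return json.Unmarshal(arr[0], args)
}
```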
Harshavardhana
40e8893fb9 Merge pull request #1122 from harshavardhana/osx-build
build: Simplify and build only with Makefiles.
2016-02-13 01:27:15 -08:00
Harshavardhana
ebdbe2db44 build: Simplify and build only with Makefiles.
Configure is not portable.
2016-02-13 01:19:13 -08:00
Harshavardhana
3fd637c1f4 Merge branch 'json-rpc' 2016-02-12 20:32:47 -08:00
Krishna Srinivas
56ffe79f25 jsonrpc: return json2.Error from the json-rpc layer to the client 2016-02-12 20:30:02 -08:00
Krishna Srinivas
6ad39cb386 WebUI: move from rpc/v2/json to rpc/v2/json2 which has better error response structure. 2016-02-12 20:29:56 -08:00
Harshavardhana
f98675660b Merge pull request #1119 from harshavardhana/minio
xl: Moved to minio/minio
2016-02-11 15:54:02 -08:00
Harshavardhana
62f6ffb6db xl: Moved to minio/minio - fixes #1112 2016-02-11 15:43:36 -08:00
Harshavardhana
f0e2edd632 Merge pull request #1120 from krishnasrinivas/version-check
UI: implement rpc call to return UI version
2016-02-11 11:07:57 -08:00
Krishna Srinivas
5e32dec4fb UI: implement rpc call to return UI version 2016-02-12 00:26:27 +05:30
Harshavardhana
33bd97d581 Merge pull request #1118 from harshavardhana/make-file
build: Change UI assets location.
2016-02-10 16:21:58 -08:00
Harshavardhana
70bbf4c8ec build: Change UI assets location. 2016-02-10 16:10:33 -08:00
Harshavardhana
785a1f0eb0 Merge pull request #1117 from harshavardhana/devel
pkg/ioutils: remove usage of os.Lstat() in FTW()
2016-02-10 13:52:16 -08:00
Harshavardhana
6e9d73426b pkg/ioutils: True should be true 2016-02-10 13:33:36 -08:00
Bala.FA
5e4b13f4bd remove unused functions 2016-02-10 13:32:53 -08:00
Bala.FA
255505a83b pkg/ioutils: remove usage of os.Lstat() in FTW()
os.Readdir() is used to get file entries, where stat info is already
present.  This patch fixes FTW to use the stat info provided by os.Readdir().
2016-02-10 13:32:53 -08:00
Harshavardhana
31dbdb1787 Merge pull request #1113 from harshavardhana/list-objects-optimization
listObjects: list objects minor optimization.
2016-02-10 10:33:37 -08:00
Harshavardhana
7c38430710 Merge pull request #1115 from krishnasrinivas/return-error-type
jsonrpc: WrapError() makes jsonrpc return unnecessary details in the error message.
2016-02-10 10:18:12 -08:00
Krishna Srinivas
318265ecaf jsonrpc: WrapError() makes jsonrpc return unnecessary details in the error message. 2016-02-10 21:22:37 +05:30
Harshavardhana
98ee5fcf55 build: Add spelling checks and check if curl is installed. 2016-02-10 00:18:05 -08:00
Harshavardhana
9b29af8bbe listObjects: list objects minor optimization.
Minor optimization.

- Add 1000 entries buffered channel for walkerCh.
- Reset marker after the lexical order has reached and
  compare only if the marker is set.
2016-02-09 21:45:19 -08:00
Harshavardhana
4ef01bc225 Merge pull request #1109 from krishnasrinivas/getuiversion
rpc: Add rpc uiVersion string
2016-02-09 10:29:28 -08:00
Krishna Srinivas
bf6078df13 webrpc: Implement GetUIVersion json-rpc API. 2016-02-09 18:09:32 +05:30
Harshavardhana
4e328d7b2c Merge pull request #1106 from harshavardhana/support
pkg/user: Support 32bit darwin.
2016-02-08 01:59:02 -08:00
Harshavardhana
42fcb27308 pkg/user: Support 32bit darwin in user package. 2016-02-08 01:34:25 -08:00
Harshavardhana
e79a73a3f5 Merge pull request #1105 from harshavardhana/cleanup-assets
build: Cleanup assets file upon make clean.
2016-02-07 11:06:08 -08:00
Harshavardhana
2c6da82788 build: Cleanup assets file upon make clean. 2016-02-07 10:55:51 -08:00
Harshavardhana
012d7ae2c3 Merge pull request #1104 from harshavardhana/response-content
getObject: Add support for special response headers.
2016-02-07 04:03:13 -08:00
Harshavardhana
99fbc0fcb3 getObject: Add support for special response headers.
Supports now response-content-type, response-content-disposition,
response-cache-control, response-expires.
2016-02-07 03:55:16 -08:00
Harshavardhana
de79440de2 Merge pull request #1103 from harshavardhana/min-free-disk
server: Remove max-buckets option and now max buckets is unlimited.
2016-02-06 18:33:04 -08:00
Harshavardhana
f4c8120cf9 server: Remove max-buckets option and now max buckets is unlimited.
minio server max-buckets option removed. min-free-disk option is
now a flag.
2016-02-06 18:25:47 -08:00
Harshavardhana
e7fec22224 Merge pull request #1102 from harshavardhana/fs-multi
multipart: Increase locked critical for CompleteMultipart.
2016-02-06 02:02:41 -08:00
Harshavardhana
4e6e78598f multipart: Increase locked critical for CompleteMultipart. 2016-02-06 01:46:05 -08:00
Harshavardhana
643ef30533 Merge pull request #1101 from harshavardhana/combine
multipart: Multipart session map now is based on uploadID.
2016-02-05 23:38:19 -08:00
Harshavardhana
8df201ef30 multipart: Multipart session map now is based on uploadID.
- Fixes initiating parallel uploads, and configs being quickly
  re-written by another incoming request.
- Parallel uploads work smoothly now and return expected behavior.
2016-02-05 23:32:30 -08:00
Harshavardhana
3f5804f75a Merge pull request #1100 from harshavardhana/multipart-resume
multipart: Multipart resume simplify further.
2016-02-05 17:49:37 -08:00
Harshavardhana
69bd001c8b multipart: Multipart resume simplify further. 2016-02-05 17:40:08 -08:00
Harshavardhana
7f7697ca38 Merge pull request #1099 from harshavardhana/fix-lock
setBucketMetadata: Fix a deadlock
2016-02-05 16:47:05 -08:00
Harshavardhana
8bf1045645 setBucketMetadata: Fix a deadlock. 2016-02-05 15:48:08 -08:00
Harshavardhana
c922dd6fbd Merge pull request #1097 from harshavardhana/mimedb
fs: Use mimedb now for contentType
2016-02-05 15:22:49 -08:00
Harshavardhana
6f80380497 fs: Use mimedb now. 2016-02-05 15:09:23 -08:00
Harshavardhana
35dcccb4cd Merge remote-tracking branch 'abperiasamy/mimedb' into mimedb 2016-02-05 15:03:47 -08:00
Harshavardhana
fca425f156 Merge pull request #1092 from harshavardhana/more-fixes
multipart: Code cleanup
2016-02-05 14:58:43 -08:00
Harshavardhana
a4c005ce30 multipart: Code cleanup
- More locking cleanup. Fix naming convention.
- Simplify concatenation and blocking calls.
2016-02-05 14:42:09 -08:00
Harshavardhana
5b4c73e74d Merge pull request #1096 from krishnasrinivas/web-remove-object
JSONrpc: implement removeObject RPC call
2016-02-05 10:22:24 -08:00
Krishna Srinivas
3a8fff46f9 JSONrpc: implement removeObject RPC call 2016-02-05 19:46:36 +05:30
Bala.FA
d79fcb1800 fix: handle Transfer-Encoding for make bucket
In case of make bucket, there is a chance that Transfer-Encoding is sent
while Content-Length is missing.  This patch fixes the problem by
checking whether Transfer-Encoding: chunked is set in addition to
checking Content-Length.
2016-02-05 19:01:39 +05:30
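The check being described can be sketched as follows (hasRequestBody is an illustrative helper, not the handler code):
```
package chunkedsketch

import "net/http"

// hasRequestBody treats a request as carrying a body when either a
// positive Content-Length is declared or Transfer-Encoding is chunked,
// in which case net/http reports ContentLength as -1.
func hasRequestBody(r *http.Request) bool {
	if r.ContentLength > 0 {
		return true
	}
	for _, enc := range r.TransferEncoding {
		if enc == "chunked" {
			return true
		}
	}
	return false
}
```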
Anand Babu (AB) Periasamy
d8abb36653 contentdb replaced by new mimedb 2016-02-05 03:49:24 -08:00
Harshavardhana
3a55d05eff Merge pull request #1091 from harshavardhana/gpg
build: Add build dependency check for 'gpg'
2016-02-05 03:20:23 -08:00
Harshavardhana
ddc99e3112 build: Add build dependency check for 'gpg' 2016-02-05 03:10:23 -08:00
Harshavardhana
198a92c3c4 Merge pull request #1090 from harshavardhana/multipart
fs: Add granular locking.
2016-02-04 21:46:06 -08:00
Harshavardhana
8557cbc9b7 fs: Add granular locking. 2016-02-04 20:40:58 -08:00
Harshavardhana
f2113d35be Merge pull request #1089 from harshavardhana/docker
docker: Fix docker build.
2016-02-04 19:13:14 -08:00
Harshavardhana
c9d2904e42 docker: Fix docker build. 2016-02-04 18:22:37 -08:00
Harshavardhana
1c75d35c26 Merge pull request #1088 from harshavardhana/enable-ui-assets
ui-assets: Integrate UI assets.
2016-02-04 18:17:27 -08:00
Harshavardhana
a066184bed ui-assets: Integrate UI assets. 2016-02-04 18:07:05 -08:00
Harshavardhana
53a983659e Merge pull request #1084 from krishnasrinivas/go-bindata-assetfs
UI: vendorize github.com/elazarl/go-bindata-assetfs which is needed by ui-assets.go
2016-02-04 18:06:14 -08:00
Harshavardhana
ef47255f5e Merge pull request #1087 from harshavardhana/handlers
handlers: Fix the naming of all handlers.
2016-02-04 15:30:19 -08:00
Harshavardhana
012fbe756b handlers: Fix the naming of all handlers. 2016-02-04 15:02:53 -08:00
Harshavardhana
4d97c042da Merge pull request #1086 from krishnasrinivas/browser-cache
browser-caching: enable browser caching for webUI
2016-02-04 14:41:38 -08:00
Krishna Srinivas
a344e7713a browser-caching: enable browser caching for WebUI 2016-02-05 03:54:05 +05:30
Harshavardhana
09a54f9032 Merge pull request #1085 from harshavardhana/fs-api
fs: Cleanup Golang errors to be called 'e' and probe to be called as …
2016-02-04 14:02:19 -08:00
Harshavardhana
7a3409c309 fs: Cleanup Golang errors to be called 'e' and probe to be called as 'err'
- Replace the ACL checks back, remove them when bucket
  policy is implemented.
- Move FTW (File Tree Walk) into ioutils package.
2016-02-04 13:43:52 -08:00
Krishna Srinivas
d038393156 UI: vendorize github.com/elazarl/go-bindata-assetfs which is needed by ui-assets.go 2016-02-04 16:50:34 +05:30
Harshavardhana
b49f21ec82 Merge pull request #1082 from harshavardhana/about-api
api: Implement About API.
2016-02-03 23:19:40 -08:00
Harshavardhana
e63a982dee api: Implement About API. 2016-02-03 22:46:45 -08:00
Harshavardhana
a1c6e4055b Merge pull request #1073 from harshavardhana/createobject
fs: Fail createObject with appropriate message.
2016-02-03 21:56:44 -08:00
Harshavardhana
835b297ba7 fs: Fail createObject with appropriate message.
Fail createObject() if a file already exists and one attempts
to create a prefix/directory by same name.

Send an appropriate error back to the client as 409 Conflict.
2016-02-03 21:49:36 -08:00
Harshavardhana
729e032a50 Merge pull request #1079 from harshavardhana/implement-stat
web: GetObjectURL should check if file exists before generating URL.
2016-02-03 00:15:54 -08:00
Harshavardhana
64b7da4686 web: GetObjectURL should check if file exists before generating URL.
Fixes - https://github.com/minio/miniobrowser/issues/20
2016-02-03 00:00:36 -08:00
Harshavardhana
ff0dd38957 Merge pull request #1078 from harshavardhana/auto-expiry
expiry: Remove auto-expiry.
2016-02-02 23:40:30 -08:00
Harshavardhana
454d71cafa expiry: Remove auto-expiry.
Move the logic outside and use scripting, cronjob to delete files.

Fixes #1019
2016-02-02 19:35:51 -08:00
Harshavardhana
15924a8f05 Merge pull request #1077 from harshavardhana/flags
flags: Remove anonymous, ratelimit, json and web-address flags.
2016-02-02 19:01:52 -08:00
Harshavardhana
df91661ec6 flags: Remove anonymous, ratelimit, json and web-address flags.
- Web address now uses the port + 1 from the API address port directly.
- Remove ratelimiting, ratelimiting will be achieved if necessary through
  iptables.
- Remove json flag, not needed anymore.
- Remove anonymous flag, the server will no longer be anonymous; for play.minio.io
  we will use demo credentials.
2016-02-02 18:37:09 -08:00
Harshavardhana
e39b6caada Merge pull request #1075 from harshavardhana/content-type
statObject: Make sure to lowercase file extensions.
2016-02-02 18:02:59 -08:00
Harshavardhana
81fcbd2a54 statObject: Make sure to lowercase file extensions. 2016-02-02 17:54:59 -08:00
Harshavardhana
b01594ac33 Merge pull request #1072 from harshavardhana/query
vendor: Update minio-go library with fixes for object listing.
2016-02-02 14:56:49 -08:00
Harshavardhana
de9682a4e7 vendor: Update minio-go library with fixes for object listing. 2016-02-02 11:59:55 -08:00
Harshavardhana
9ddfa2529c Merge pull request #1070 from harshavardhana/bug-fix
web: ListObjects is delimited, do not send a stat on prefix.
2016-02-01 13:33:23 -08:00
Harshavardhana
b3bde61396 web: ListObjects is delimited, do not send a stat on prefix. 2016-02-01 12:47:46 -08:00
Harshavardhana
9dfce111d9 Merge pull request #1069 from harshavardhana/list-objects
contentType: Reply back proper contentTypes based on the file extension.
2016-02-01 12:35:56 -08:00
Harshavardhana
0aedb67de0 contentType: Reply back proper contentTypes based on the file extension.
Currently the server would set 'application/octet-stream' for all
objects; now set this value based on the file extension transparently.

This is useful for the minio browser to facilitate displaying
proper icons for the different mime data types.
2016-02-01 12:19:58 -08:00
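A minimal sketch of the idea using the standard library; the actual change uses a vendored mime database rather than mime.TypeByExtension, so the helper below is only illustrative.
```
package ctypesketch

import (
	"mime"
	"path/filepath"
)

// contentTypeFor picks a Content-Type from the object's file extension,
// falling back to application/octet-stream when the extension is unknown.
func contentTypeFor(object string) string {
	if t := mime.TypeByExtension(filepath.Ext(object)); t != "" {
		return t
	}
	return "application/octet-stream"
}
```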
Harshavardhana
23ca11f75b Merge pull request #1068 from harshavardhana/update-doc
Add doc change.
2016-02-01 11:11:39 -08:00
Harshavardhana
d55f72f09a Add doc change. 2016-02-01 02:45:19 -08:00
Harshavardhana
141e8029bf Merge pull request #1067 from harshavardhana/minio-message
server: Add new set of message.
2016-01-30 18:41:46 -08:00
Harshavardhana
a0c753b6eb server: Add new set of message. 2016-01-30 18:33:33 -08:00
Harshavardhana
0784969767 Merge pull request #1065 from harshavardhana/read-me
doc: Update readme.md
2016-01-29 18:15:10 -08:00
Harshavardhana
cce2f41125 doc: Update readme.md 2016-01-29 18:09:34 -08:00
Harshavardhana
25cd37b209 Merge pull request #1063 from harshavardhana/trim-inputs
jwt: Trim username and password strings
2016-01-29 02:18:58 -08:00
Harshavardhana
716fde3b93 Merge pull request #1062 from notnoopci/no-empty-md5-header
Don't set empty ETag values
2016-01-28 20:51:59 -08:00
Harshavardhana
8af8889a36 jwt: Trim username and password strings 2016-01-28 20:48:41 -08:00
Mahmood Ali
43685788ab Don't set empty ETag values
Currently, metadata.Md5 value isn't populated, yet the ETag is set to
`""`, causing AWS Java SDK to fail integrity checks with GetObject
api calls.
2016-01-28 22:57:23 -05:00
Harshavardhana
52f00042b4 Merge pull request #1061 from harshavardhana/order-presign
presign: Verify query params for presign individually
2016-01-28 12:27:13 -08:00
Harshavardhana
2469c9c591 presign: Verify query params for presign individually
Incoming request params in presigned requests can come in a different order
for different implementations. Rather than verifying the full string,
we should verify individual params instead.

This patch fixes an incompatibility issue with AWS SDK Java.

Fixes #1059 - Thanks to @notnoopci for reporting this problem.
2016-01-28 12:16:56 -08:00
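In sketch form, the per-parameter check looks something like the following, using url.Values so ordering no longer matters; the function and expected values are illustrative, not the actual verification code.
```
package presignsketch

import "net/url"

// presignParamsMatch verifies the presigned query parameters one by one
// instead of comparing a single fully encoded query string.
func presignParamsMatch(query url.Values, expectedCredential, expectedSignature string) bool {
	if query.Get("X-Amz-Algorithm") != "AWS4-HMAC-SHA256" {
		return false
	}
	if query.Get("X-Amz-Credential") != expectedCredential {
		return false
	}
	return query.Get("X-Amz-Signature") == expectedSignature
}
```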
Harshavardhana
c4588b3cb9 Merge pull request #1060 from krishnasrinivas/cors-fixes
CORS: cors handling should be before auth handling. cors should allow PUT.
2016-01-28 10:49:09 -08:00
Krishna Srinivas
81b255511f CORS: cors handling should be before auth handling. cors should allow PUT. 2016-01-28 22:54:03 +05:30
Harshavardhana
ed69c58490 Merge pull request #1057 from harshavardhana/fix
listObjects: ListObjects should have idempotent behavior.
2016-01-28 03:24:16 -08:00
Harshavardhana
5934a00058 listObjects: ListObjects should have idempotent behavior.
listObjects was returning inconsistent results, i.e. missing
entries during recursive and non-recursive listing. This led
to 'mc mirror' copying contents repeatedly, considering
these files to be missing on the destination.

This patch addresses this problem - fixes #1056
2016-01-28 03:17:40 -08:00
Anand Babu (AB) Periasamy
2d8512465f Merge pull request #1047 from fwessels/master
Updated mc and aws client doc strings
2016-01-27 12:48:08 -08:00
Harshavardhana
be5a865764 Merge pull request #1055 from harshavardhana/update
update: Minio fix update url.
2016-01-27 11:54:18 -08:00
Harshavardhana
0799a0bec5 update: Minio fix update url. 2016-01-27 11:43:26 -08:00
Harshavardhana
dfc84dd451 Merge pull request #1054 from harshavardhana/json-web
jwt: Deprecate RSA usage, use HMAC instead.
2016-01-27 03:48:49 -08:00
Harshavardhana
db387912f2 jwt: Deprecate RSA usage, use HMAC instead.
HMAC is a much simpler implementation, providing the same
benefits as RSA while avoiding additional steps and keeping the code
simpler.

This patch also additionally

- Implements PutObjectURL API.
- GetObjectURL, PutObjectURL take TargetHost as another
  argument for generating URLs for the proper target destination.
- Adds experimental TLS support for JSON RPC calls.
2016-01-27 03:38:33 -08:00
Harshavardhana
0c96ace8ad Merge pull request #1053 from harshavardhana/infinite-loop
listObjects: Marker should be unescaped before being used internally.
2016-01-26 23:38:57 -08:00
Harshavardhana
9ca3372870 listObjects: Marker should be unescaped before being used internally.
Without this change listObjects() goes into an infinite loop for
files which have special characters, i.e. "++" encoded as "%2B%2B".

We have to unescape and convert them to their native representation
before they are used internally.

Fixes #1052
2016-01-26 23:32:59 -08:00
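The fix boils down to unescaping the marker before use, along the lines of this sketch (normalizeMarker is a hypothetical helper):
```
package markersketch

import "net/url"

// normalizeMarker converts an escaped marker such as "%2B%2B" back to
// its native representation ("++") before it is used internally.
func normalizeMarker(marker string) (string, error) {
	return url.QueryUnescape(marker)
}
```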
Harshavardhana
5d87fdb35c Merge pull request #1051 from harshavardhana/fix-multipart
multipart: NewMultipartUpload shouldn't return empty UploadID
2016-01-26 15:15:44 -08:00
Harshavardhana
2e311168ee multipart: NewMultipartUpload shouldn't return empty UploadID
Existing code
```
{
  if os.IsNotExist(e) {
       e = os.MkdirAll(objectDir, 0700)
       if e != nil {
            return "", probe.NewError(e)
       }
  }
  return "", probe.NewError(e)  ---> Error was here.
}
```
For a successful 'MkdirAll' it would still return an empty uploadID,
but the 'error' would be nil. This would let the request succeed but the
client would fail.

Fix is to re-arrange the logic. Thanks to Alexander Neumann @fd0, for
reporting this problem.
2016-01-26 15:00:34 -08:00
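One way the rearranged logic can be pictured, mirroring the snippet above (illustrative, not the exact patch):
```
if e != nil {
	if !os.IsNotExist(e) {
		return "", probe.NewError(e)
	}
	// Directory was missing: create it, and only fail on a real error.
	if e = os.MkdirAll(objectDir, 0700); e != nil {
		return "", probe.NewError(e)
	}
}
// fall through and return a freshly generated uploadID
```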
Harshavardhana
e6df2d639c Merge pull request #1050 from fwessels/aws-cli
Updated 'aws s3 ls' example in README.md to include s3://
2016-01-26 14:28:33 -08:00
frankw
dd4c08cd79 Updated 'aws s3 ls' example in README.md to include s3:// (which is apparently needed for other 'aws s3' commands such as 'cp' or 'rm') 2016-01-26 23:09:06 +01:00
frankw
9a0cd354a3 Updated 'aws s3 ls' example in README.md to include s3:// (which is apparently needed for other aws s3 commands such as cp or rm) 2016-01-26 22:55:12 +01:00
frankw
5ad9167673 Merge remote-tracking branch 'upstream/master' 2016-01-26 22:50:10 +01:00
Harshavardhana
68a25aa425 Merge pull request #1039 from harshavardhana/channel-ftw
listObjects: Channel based file tree walk.
2016-01-26 12:41:21 -08:00
Harshavardhana
18375b7794 ioutils: Add tests 2016-01-26 12:34:04 -08:00
Harshavardhana
13feabefd5 diskInfo: Add DiskInfo API 2016-01-26 12:08:45 -08:00
Harshavardhana
1341fb79c3 listBuckets: Bump up the limit of max buckets to 1000. 2016-01-26 11:49:17 -08:00
Harshavardhana
f5d6be158e listObjects: Simplify channel based changes. 2016-01-26 02:19:55 -08:00
frankw
0a5094c73a Added Alias argument in configuring/using mc client for minio server
Both in README.md as well as sample commands when starting minio server
2016-01-26 10:42:21 +01:00
Harshavardhana
682020ef2f listObjects: Channel based changes.
Supports:
 - prefixes
 - marker
2016-01-25 20:39:38 -08:00
Krishna Srinivas
9e18bfa60e listObjects: Channel based ftw - initial implementation. 2016-01-25 18:58:07 -08:00
Harshavardhana
67a70eb6d6 Merge pull request #1046 from harshavardhana/bucket-not-empty
deleteBucket: Directory not empty error on windows is "directory is n…
2016-01-25 18:04:04 -08:00
Harshavardhana
2ec9b16667 deleteBucket: Directory not empty error on windows is "directory is not empty" 2016-01-25 17:58:43 -08:00
Harshavardhana
35d4521ece Merge pull request #1045 from harshavardhana/get-object-cleanups
api: More cleanups at WebAPI.
2016-01-25 17:45:53 -08:00
Harshavardhana
ae2f15c6d0 api: More cleanups at WebAPI.
- Fixes a bug where bucketName was not denormalized.
- Remove unneeded functions from jwt.go
2016-01-25 17:30:08 -08:00
Harshavardhana
f1ea609175 Merge pull request #1042 from harshavardhana/jwt-api
jwt: Add JWT support for minio server.
2016-01-25 16:45:22 -08:00
Harshavardhana
497f13d733 api: Various fixes.
- limit list buckets to only 100 buckets; all uppercase buckets
  are now lowercase and work transparently with all calls.
- Change disk.Stat to disk.GetInfo and return back disk.Info{} struct.
- Introduce new ioutils package which implements ReadDirN(path, n),
  ReadDirNamesN(path, n)
2016-01-25 16:08:27 -08:00
Harshavardhana
432a073e6b Add MakeBucket API. 2016-01-24 22:45:22 -08:00
Harshavardhana
3f1c4bb4b0 Bring in the list APIs implemented by Bala <bala@minio.io> 2016-01-24 16:39:48 -08:00
Harshavardhana
0a9496462a jwt: Add JWT support for minio server.
Please read JWT.md before using this feature.
2016-01-22 17:38:05 -08:00
Harshavardhana
d8fa68ff7e Merge pull request #1040 from hackintoshrao/make-file-edit
Minor changes to Makefile
2016-01-20 01:56:59 -08:00
Karthic Rao
b457a61cb2 Minor changes to Makefile to avoid the make failure when GOPATH/bin is not part of PATH 2016-01-20 14:46:12 +05:30
Anand Babu (AB) Periasamy
9cb590d800 Merge pull request #1038 from harshavardhana/shadow
server: Fix shadowing bug reported by go vet on go1.6beta2
2016-01-18 20:56:42 -08:00
Harshavardhana
1aec985c00 server: Fix shadowing bug reported by go vet on go1.6beta2 2016-01-18 18:45:24 -08:00
Harshavardhana
092ed972d0 Merge pull request #1037 from harshavardhana/add-config
serverConfig: Add a new region config entry.
2016-01-17 01:50:30 -08:00
Harshavardhana
cb7b2762f9 serverConfig: Add a new region config entry.
To change default region from 'us-east-1' to 'custom'.
Add a region value in your 'config.json'.

	"version": "2",
	"credentials": {
		"accessKeyId": "****************",
		"secretAccessKey": "***************",
	        "region": "my-region"
	},
2016-01-17 01:39:11 -08:00
Harshavardhana
8a7bf0dde0 Merge pull request #1036 from harshavardhana/docker
build: Do not hardcode docker binary path
2016-01-15 10:42:13 -08:00
Harshavardhana
023f799820 build: Do not hardcode docker binary path
Fixes #1035
2016-01-15 10:36:45 -08:00
Harshavardhana
8ff43086fb Merge pull request #1034 from harshavardhana/handle-cgo
build: Handle builds on env where CGO_ENABLED=0
2016-01-14 18:28:47 -08:00
Harshavardhana
88686dc6e3 build: Handle builds on env where CGO_ENABLED=0
Fixes #1033
2016-01-14 18:19:01 -08:00
Harshavardhana
3cb92bdf9c Merge pull request #1032 from koolhead17/command-modification
Modified command for adding a host to minio.
2016-01-13 10:52:12 -08:00
koolhead17
d4dbd09a9c Modified command for adding a host to minio. 2016-01-13 23:36:42 +05:30
Harshavardhana
a6941c0b44 Merge pull request #1030 from harshavardhana/expect-100
signature: Add aws-cli work-around for now.
2016-01-09 10:53:54 -08:00
Harshavardhana
8cdaf87c8f signature: Add aws-cli work-around for now.
Golang http server strips off the 'Expect' header; if the
client sent this as part of signed headers we need to
handle it, otherwise we would see a signature mismatch.
`aws-cli` sets this as part of signed headers, which is
a bad idea since servers trying to implement AWS
Signature version '4' will all encounter this issue.
According to
 http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20
Expect header is always of form:

   Expect       =  "Expect" ":" 1#expectation
   expectation  =  "100-continue" | expectation-extension

So it is safe to assume that '100-continue' is what would
be sent; for the time being keep this work-around.
2016-01-09 10:47:59 -08:00
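A minimal Go sketch of the work-around described above; extractSignedHeaders is a hypothetical helper used while rebuilding the canonical request, and the substituted value relies on RFC 2616 only defining '100-continue' for this header.

```
package signature

import (
	"net/http"
	"strings"
)

// extractSignedHeaders collects the headers the client claims to have signed.
// Since Go's net/http server consumes the Expect header, we substitute
// "100-continue" - the only expectation RFC 2616 defines - so the recomputed
// signature can still match what the client signed.
func extractSignedHeaders(signedHeaders []string, reqHeaders http.Header) http.Header {
	extracted := make(http.Header)
	for _, header := range signedHeaders {
		key := http.CanonicalHeaderKey(header)
		if vals, ok := reqHeaders[key]; ok {
			extracted[key] = vals
			continue
		}
		if strings.EqualFold(header, "expect") {
			extracted.Set("Expect", "100-continue")
		}
	}
	return extracted
}
```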
Harshavardhana
70bd51994c Merge pull request #1026 from harshavardhana/add-doc
Add aws-cli documentation.
2016-01-08 20:01:38 -08:00
Harshavardhana
80d175b34c Add aws-cli documentation. 2016-01-08 19:59:23 -08:00
Harshavardhana
c963fe244d Merge pull request #1022 from koolhead17/patch-1
Update README.md
2016-01-08 10:33:10 -08:00
koolhead17
888111816d Update README.md
minor change, adding '--' with help option as we introduced the same with mc.
2016-01-08 23:47:08 +05:30
Harshavardhana
3bf7685327 Merge pull request #1021 from harshavardhana/transfer-encoding
http: Enable Transfer-Encoding chunked transfer
2016-01-08 00:50:31 -08:00
Harshavardhana
0c6a6dc380 http: Enable Transfer-Encoding chunked transfer
Fixes #1020
2016-01-08 00:47:20 -08:00
Harshavardhana
5e06c15b44 Merge pull request #1017 from koolhead17/GOenv-change
Modified instructions for installing Go.
2015-12-29 23:52:22 -08:00
koolhead17
f13ad3e02e Modified instructions for installing Go. 2015-12-30 12:23:28 +05:30
Harshavardhana
82212f5fbc Merge pull request #1016 from harshavardhana/fix-signautre
s3cmd: Fix signature issues related to s3cmd.
2015-12-28 18:12:21 -08:00
Harshavardhana
d955ce4123 s3cmd: Fix signature issues related to s3cmd.
Support regions both 'us-east-1' and 'US' (short hand for US Standard)
honored by S3.
2015-12-28 18:05:28 -08:00
Harshavardhana
7228bc9919 Merge pull request #1015 from harshavardhana/location
location: Return a set location properly on complete multipart upload…
2015-12-28 15:30:43 -08:00
Harshavardhana
e7474bed43 location: Return a set location properly on complete multipart upload request. 2015-12-28 15:20:06 -08:00
Harshavardhana
d548316343 Merge pull request #1014 from harshavardhana/fixes
handlers: read ContentLength directly from http.Request
2015-12-27 23:07:51 -08:00
Harshavardhana
2f67559684 handlers: read ContentLength value directly from http.Request.
Do not look for Content-Length in the headers and try to convert it into
an integer representation; use the ContentLength field from *http.Request* instead.

If ContentLength is reported as '-1', treat it as an error condition,
since a malformed body could crash the server.

Fixes #1011
2015-12-27 23:03:32 -08:00
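A hedged Go sketch of the idea; the handler name and status codes are illustrative, not the repository's actual handler. It relies on the already-parsed ContentLength field and rejects the unknown-length case.

```
package main

import (
	"io"
	"net/http"
)

// putObjectHandler: http.Request.ContentLength is already parsed by net/http,
// and -1 means the length is unknown, which we treat as an error instead of
// trusting a possibly malformed body.
func putObjectHandler(w http.ResponseWriter, r *http.Request) {
	if r.ContentLength == -1 {
		w.WriteHeader(http.StatusLengthRequired)
		return
	}
	// Read exactly the declared number of bytes from the body.
	if _, err := io.CopyN(io.Discard, r.Body, r.ContentLength); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusOK)
}
```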
Harshavardhana
7aab7ba946 Merge pull request #1013 from harshavardhana/bucket-location
bucket-location: Implement bucket location response.
2015-12-27 00:50:42 -07:00
Harshavardhana
0345c8fffb bucket-location: Implement bucket location response. 2015-12-27 00:48:11 -07:00
Harshavardhana
155a4110b1 Merge pull request #1010 from harshavardhana/update-url
Fix update URL.
2015-12-22 17:31:41 -08:00
Harshavardhana
7385302578 Fix update URL. 2015-12-22 17:28:47 -08:00
Harshavardhana
a9c634cc56 Merge pull request #1006 from harshavardhana/fix-mc-download-url
minio: Fix mc and minio download URLs.
2015-12-22 08:21:20 -08:00
Harshavardhana
523e75759d minio: Fix mc and minio download URLs.
Fixes #991
2015-12-13 13:57:56 -08:00
Anand Babu (AB) Periasamy
738c543720 Merge pull request #1004 from harshavardhana/version
s3cmd: Handle support for s3cmd.
2015-12-09 17:21:46 -08:00
Harshavardhana
3c71c5c80c s3cmd: Handle support for s3cmd. 2015-12-09 17:16:53 -08:00
Harshavardhana
7187668828 Merge pull request #1002 from harshavardhana/access
accessPolicy: Access policy response is not correct.
2015-12-07 14:59:57 -08:00
Harshavardhana
b07eed56be accessPolicy: Access policy response is not correct. 2015-12-07 14:52:27 -08:00
Harshavardhana
83b2cd9ceb Merge pull request #1001 from harshavardhana/user
pkg/user: Add pending tests.
2015-12-07 14:18:48 -08:00
Harshavardhana
1bfb490f90 pkg/user: Add pending tests. 2015-12-07 14:13:54 -08:00
Harshavardhana
06879b3b44 Merge pull request #1000 from harshavardhana/fs-bucket
bucketName: relax bucket names, now allow numbers as starting charact…
2015-12-07 14:04:26 -08:00
Harshavardhana
4fc161ddb1 bucketName: relax bucket names, now allow numbers as starting characters. 2015-12-07 13:58:55 -08:00
Harshavardhana
1413e761cf Merge pull request #998 from harshavardhana/config-folder
minio: Add config-folder option.
2015-12-07 13:58:33 -08:00
Harshavardhana
836f5204af minio: Add config-folder option.
Fixes #997
2015-12-07 12:34:09 -08:00
Harshavardhana
a97c4ebce3 Merge pull request #999 from harshavardhana/acl
ignore-handlers: Enhance ignore handlers to cater for bucket resource…
2015-12-07 12:24:00 -08:00
Harshavardhana
57430fe183 ignore-handlers: Enhance ignore handlers to cater for bucket resources with or without separators
Fixes an issue which we saw with minio-py
2015-12-07 12:06:27 -08:00
Harshavardhana
16cf9d9055 Merge pull request #996 from minio/harshavardhana-patch-1
Add s3cmd support status
2015-12-06 13:45:25 -08:00
Harshavardhana
aac85334fd Add s3cmd support status
Fixes #987
2015-12-06 13:40:48 -08:00
Harshavardhana
6255f186dd Merge pull request #995 from minio/harshavardhana-patch-1
Update README.md
2015-12-04 16:57:30 -08:00
Harshavardhana
330ceddfd4 Update README.md 2015-12-04 16:26:09 -08:00
Anand Babu (AB) Periasamy
bed528d569 Merge pull request #994 from abperiasamy/contentdb-race
fixes race in Init
2015-12-03 01:11:14 -08:00
Anand Babu (AB) Periasamy
8e8538175b fixes race in Init 2015-12-03 01:08:05 -08:00
Harshavardhana
c327c56a16 Merge pull request #992 from abperiasamy/contentdb
contentdb provides file.ext to content-type lookups.
2015-12-02 17:00:09 -08:00
Anand Babu (AB) Periasamy
25df427383 contentdb file.ext to content-type lookups 2015-12-02 16:55:26 -08:00
Harshavardhana
5b18c787f0 Merge pull request #990 from harshavardhana/update-readme
doc: update README.md
2015-12-02 11:31:59 -08:00
Harshavardhana
9c64a232e2 doc: update README.md 2015-12-02 11:28:41 -08:00
Harshavardhana
df94b4d3ca Merge pull request #989 from harshavardhana/maintainers
doc: Add MAINTAINERS.md
2015-12-02 10:55:30 -08:00
Harshavardhana
661229d7f7 doc: Add MAINTAINERS.md 2015-12-02 10:50:54 -08:00
Harshavardhana
66b3918b04 Merge pull request #988 from harshavardhana/add-release
release: Add release scripts.
2015-12-02 10:46:15 -08:00
Harshavardhana
9722eaf02c release: Add release scripts. 2015-12-02 10:43:41 -08:00
Harshavardhana
f7029fc6f1 Merge pull request #986 from harshavardhana/fix-makefile
makefile: Fix docker image
2015-11-28 12:13:35 -08:00
Harshavardhana
c22eb6d2c5 makefile: Fix docker image 2015-11-28 12:10:13 -08:00
Harshavardhana
62e4d0d74b Merge pull request #985 from minio/harshavardhana-patch-1
Fix docker.md add data volume container
2015-11-28 11:38:18 -08:00
Harshavardhana
1111e53a12 Fix docker.md with data volume container 2015-11-28 11:33:23 -08:00
Harshavardhana
910e270c1c Merge pull request #984 from minio/harshavardhana-patch-1
Update Docker.md
2015-11-27 20:54:24 -08:00
Harshavardhana
10b6a6a74c Update Docker.md 2015-11-27 20:41:41 -08:00
Harshavardhana
f35128eec5 Merge pull request #983 from harshavardhana/improve-update
update: Re-write update command.
2015-11-25 21:23:56 -08:00
Harshavardhana
3f9e1c1e68 update: Re-write update command. 2015-11-25 21:17:57 -08:00
Harshavardhana
b429bd7ceb Merge pull request #982 from harshavardhana/aws-sdk-go
doc: Cleanup aws-sdk-go.md
2015-11-23 13:59:30 -08:00
Harshavardhana
3478c8a9a2 doc: Cleanup aws-sdk-go.md 2015-11-23 13:53:55 -08:00
Harshavardhana
3986e3cf2b Merge pull request #981 from harshavardhana/signature
signature/region: Remove 'milkyway' and use 'us-east-1' as default.
2015-11-23 13:43:02 -08:00
Harshavardhana
7c91a8495f signature/region: Remove 'milkyway' and use 'us-east-1' as default.
Fixes #980
2015-11-23 13:40:23 -08:00
Harshavardhana
fe2cb11e76 Merge pull request #979 from harshavardhana/fix-docs
minio: Fix documentation.
2015-11-23 13:32:20 -08:00
Harshavardhana
b9aa6a03b8 minio: Fix documentation. 2015-11-23 13:29:08 -08:00
Harshavardhana
9206a6ade9 Merge pull request #978 from harshavardhana/fs-fix
fs: Filter out $multiparts properly.
2015-11-22 01:58:43 -08:00
Harshavardhana
a328120e4d fs: Filter out $multiparts properly.
Relax md5 requirement during complete multipart upload - ref #977
2015-11-22 01:49:57 -08:00
Harshavardhana
2392252a7c Merge pull request #975 from harshavardhana/cli
cli: vendorize to new CLI package updates.
2015-11-21 09:03:58 -08:00
Harshavardhana
ab3fd8ea7f cli: vendorize to new CLI package updates.
- Fix a new line issue for minioHelpTemplate.
- Fixes #974
2015-11-21 09:01:18 -08:00
Harshavardhana
ffa22cf31e Merge pull request #973 from harshavardhana/atomic-sync
atomic: do not sync by default, if needed use CloseAndSync()
2015-11-17 23:13:53 -08:00
Harshavardhana
da2a6066c7 atomic: do not sync by default, if needed use CloseAndSync() 2015-11-17 23:04:18 -08:00
Harshavardhana
3bdee8e4f6 Merge pull request #972 from harshavardhana/merge-atomic
fs: use new atomic package - use FileCreateWithPrefix() API
2015-11-17 20:50:35 -08:00
Harshavardhana
35b9f965f1 fs: use new atomic package - use FileCreateWithPrefix() API 2015-11-17 16:32:20 -08:00
Harshavardhana
26f83f108a Merge pull request #971 from harshavardhana/update
doc: update download locations in Readme.md
2015-11-14 02:46:36 -08:00
Harshavardhana
3e4d69be87 doc: update download locations in Readme.md 2015-11-14 02:39:02 -08:00
Harshavardhana
1d123e479e Merge pull request #970 from harshavardhana/update
vendorize: Add new changes for sha256, sha512 for 32bit support.
2015-11-14 00:58:19 -08:00
Harshavardhana
ff161a9943 vendorize: Add new changes for sha256, sha512 for 32bit support. 2015-11-14 00:55:24 -08:00
Harshavardhana
8046b24b97 Merge pull request #969 from harshavardhana/32bit
386: Support minio server on 32bit linux.
2015-11-14 00:35:16 -08:00
Harshavardhana
f8e59e8399 386: Support minio server on 32bit linux. 2015-11-14 00:29:18 -08:00
Harshavardhana
a384ae868d Merge pull request #968 from harshavardhana/update
update: change update command reflecting new changes on dl.minio.io
2015-11-13 20:35:26 -08:00
Harshavardhana
d65f3422a9 update: change update command reflecting new changes on dl.minio.io 2015-11-13 20:31:19 -08:00
Harshavardhana
249c444e88 Merge pull request #967 from harshavardhana/handle-readonly-buckets
acl: Handle readonly buckets properly
2015-11-13 20:10:50 -08:00
Harshavardhana
e1a33deabf acl: Handle readonly buckets properly 2015-11-13 20:07:39 -08:00
Harshavardhana
ff59181527 Merge pull request #966 from harshavardhana/readdir-optimize
fs: Improve upon proper lexical ordering for ListObjects()
2015-11-11 14:56:33 -08:00
Harshavardhana
d668117a99 fs: Improve upon proper lexical ordering for ListObjects()
Handle sorting properly, making sure that we treat fs like a
flat namespace.
2015-11-11 14:52:03 -08:00
Harshavardhana
ae52fc682d Merge pull request #965 from harshavardhana/handle-spaces
signature: Handle all corner cases with spaces.
2015-11-10 18:17:44 -08:00
Harshavardhana
b0a89b1f1b signature: Handle all corner cases with spaces. 2015-11-10 18:11:49 -08:00
Harshavardhana
cefb9b1651 Merge pull request #964 from harshavardhana/attempt-windows
build: Attempt to enable windows compilation
2015-11-08 03:44:49 -08:00
Harshavardhana
c67a8cb6e5 build: Attempt to enable windows compilation 2015-11-08 03:40:53 -08:00
Harshavardhana
152c73e1ce Merge pull request #961 from krishnasrinivas/docker-fixes2
docker: second --ldflags was overriding the first --ldflags option
2015-11-07 15:37:43 -08:00
Krishna Srinivas
f77851bee0 docker: second --ldflags was overriding the first --ldflags option 2015-11-07 15:21:01 -08:00
Harshavardhana
03f185db5f Merge pull request #963 from harshavardhana/operations
doc: Add comments for router operations
2015-11-07 14:30:54 -08:00
Harshavardhana
c44754629e doc: Add comments for router operations 2015-11-07 14:15:22 -08:00
Harshavardhana
53a464b5f1 Merge pull request #960 from masu-mi/fix-trailing-slash
modify registerCloudStorageAPI for improving compatibility with S3.
2015-11-07 03:24:46 -08:00
Kanai Masumi
84de2e33c4 Fix: permit a trailing slash for compatibility with S3.
e.g. s3cmd requests the path `/<bucket>/` for PutBucket.
2015-11-07 20:22:13 +09:00
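A sketch of the compatibility fix, assuming a gorilla/mux-style router (the handler name is hypothetical): register the bucket route both with and without the trailing slash so s3cmd's `PUT /<bucket>/` is accepted.

```
package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

func putBucketHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}

// registerBucketRoutes accepts PutBucket with and without a trailing slash.
func registerBucketRoutes(router *mux.Router) {
	router.Path("/{bucket}").Methods("PUT").HandlerFunc(putBucketHandler)
	router.Path("/{bucket}/").Methods("PUT").HandlerFunc(putBucketHandler)
}

func main() {
	router := mux.NewRouter()
	registerBucketRoutes(router)
	http.ListenAndServe(":9000", router)
}
```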
Harshavardhana
f00f674b69 Merge pull request #962 from harshavardhana/update-ldflags-windows
build: update LDFLAGS for windows
2015-11-07 00:14:44 -08:00
Harshavardhana
2f98fa0a14 build: update LDFLAGS for windows 2015-11-07 00:12:35 -08:00
Harshavardhana
fe1684d706 Merge pull request #958 from krishnasrinivas/docker-fixes
docker: the docker image will now contain just the static binary
2015-11-06 21:11:13 -08:00
Krishna Srinivas
440bec28d9 docker: the docker image will now contain just the static binary 2015-11-06 20:44:58 -08:00
Harshavardhana
8ef4ec24ca Merge pull request #959 from minio/call-home
Add missing Jobs section
2015-11-06 19:02:08 -08:00
Harshavardhana
7c17035ce2 Update README.md 2015-11-06 18:58:28 -08:00
Harshavardhana
fc155f67d9 Merge pull request #957 from harshavardhana/improve-logging
logger: Improve logger input argument handling and colorize outputs
2015-11-03 22:54:53 -08:00
Harshavardhana
15909e5463 logger: Improve logger input argument handling and colorize outputs 2015-11-03 22:51:42 -08:00
Harshavardhana
02d843ef5d Merge pull request #954 from harshavardhana/minio-commit
minio/probe: Add missing Commit-ID for probe appinfo
2015-11-02 15:15:51 -08:00
Harshavardhana
74b62f1798 minio/probe: Add missing Commit-ID for probe appinfo 2015-11-02 15:05:57 -08:00
Harshavardhana
2601b3c487 Merge pull request #953 from harshavardhana/ldflags-style
build: Versioning now populated through ldflags
2015-11-02 02:43:04 -08:00
Harshavardhana
7845515f36 build: Versioning now populated through ldflags 2015-11-02 02:37:26 -08:00
Harshavardhana
ef04507bc4 Merge pull request #952 from harshavardhana/server-pkg
fs/bucket: Move bucket metadata into buckets.json
2015-11-01 21:27:52 -08:00
Harshavardhana
ab15f56a61 fs/bucket: Move bucket metadata into buckets.json 2015-11-01 21:25:01 -08:00
Harshavardhana
38aa7e1678 Merge pull request #951 from harshavardhana/bucket-delete
Simplify bucket delete - remove only bucket directory, no need to rec…
2015-10-30 16:11:36 -07:00
Harshavardhana
baf66988e9 Simplify bucket delete - remove only bucket directory, no need to recursively traverse 2015-10-30 16:03:18 -07:00
Harshavardhana
7d38967f22 Merge pull request #950 from harshavardhana/enhance-cors
Change default options for cors to handle HEAD and allow all headers
2015-10-30 12:14:21 -07:00
Harshavardhana
bd0436bf98 Change default options for cors to handle HEAD and allow all headers 2015-10-30 11:56:50 -07:00
Harshavardhana
7df81f8707 Merge pull request #949 from krishnasrinivas/rm-empty-dirs
Remove empty directories while removing an object
2015-10-30 00:16:27 -07:00
Krishna Srinivas
0010f0ee10 Remove empty directories while removing an object 2015-10-29 23:30:16 -07:00
Harshavardhana
ff7fa3637c Merge pull request #948 from minio/harshavardhana-patch-1
Update INSTALLGO.md
2015-10-28 17:16:29 -07:00
Harshavardhana
36854f266d Update INSTALLGO.md 2015-10-28 15:43:45 -07:00
Harshavardhana
679d099b16 Merge pull request #947 from harshavardhana/through-typo
Fix description typo
2015-10-27 11:59:09 -07:00
Harshavardhana
49260abad4 Fix description typo 2015-10-27 11:38:28 -07:00
Anand Babu (AB) Periasamy
d0ff64c3c8 Merge pull request #946 from abperiasamy/build-constants
build time constants
2015-10-26 02:46:16 -07:00
Anand Babu (AB) Periasamy
588833d06f build time constants 2015-10-26 02:41:04 -07:00
Harshavardhana
07839caf9b Merge pull request #945 from harshavardhana/release-tag
Update new changes in probe and add setAppInfo
2015-10-25 11:16:35 -07:00
Harshavardhana
3566d08c52 Update new changes in probe and add setAppInfo 2015-10-25 11:11:29 -07:00
Harshavardhana
9b2cff14bb Merge pull request #944 from abperiasamy/new-probe
updated probe
2015-10-23 20:12:47 -07:00
Anand Babu (AB) Periasamy
8e68591933 updated probe 2015-10-23 20:09:14 -07:00
Anand Babu (AB) Periasamy
e1a992908c Merge pull request #943 from abperiasamy/pkg-update
added pkg-update to update a vendorized package
2015-10-23 19:33:35 -07:00
Anand Babu (AB) Periasamy
a8f75f5cc1 added pkg-update to update a vendorized package 2015-10-23 19:29:44 -07:00
Harshavardhana
19bcbf76f1 Merge pull request #942 from harshavardhana/sort-unique
Leverage sort Interface to provide sortUnique function
2015-10-23 15:58:54 -07:00
Harshavardhana
53adfb38f4 Leverage sort Interface to provide sortUnique function 2015-10-23 15:55:41 -07:00
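A minimal sketch of such a helper (names are assumptions, not the repository's actual code): a string slice type implements sort.Interface, and sortUnique sorts it and drops adjacent duplicates.

```
package main

import (
	"fmt"
	"sort"
)

// byName implements sort.Interface for a slice of strings.
type byName []string

func (b byName) Len() int           { return len(b) }
func (b byName) Less(i, j int) bool { return b[i] < b[j] }
func (b byName) Swap(i, j int)      { b[i], b[j] = b[j], b[i] }

// sortUnique sorts the input and returns it with duplicates removed.
func sortUnique(items []string) []string {
	sort.Sort(byName(items))
	var out []string
	for i, v := range items {
		if i == 0 || v != items[i-1] {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	fmt.Println(sortUnique([]string{"b", "a", "b", "c", "a"})) // [a b c]
}
```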
Harshavardhana
9476e0b374 Merge pull request #941 from harshavardhana/add-fs
Move ListObjects into its own file
2015-10-22 15:42:59 -07:00
Harshavardhana
dbaa4d8643 Move ListObjects into its own file 2015-10-22 15:39:04 -07:00
Harshavardhana
9680ceb54e Merge pull request #940 from harshavardhana/aws-sdk-go
Add documentation for aws-sdk-go
2015-10-22 01:53:54 -07:00
Harshavardhana
fc5a1d7408 Add documentation for aws-sdk-go 2015-10-22 01:50:31 -07:00
Harshavardhana
485da0cb91 Merge pull request #938 from minio/harshavardhana-patch-1
go get -u
2015-10-22 00:27:50 -07:00
Harshavardhana
16f6a7d370 Update README.md 2015-10-22 00:20:34 -07:00
Harshavardhana
daffe943bb Merge pull request #937 from harshavardhana/bucket
Fix all remaining windows path issues.
2015-10-22 00:08:48 -07:00
Harshavardhana
1f66f4869b Fix all remaining windows path issues. 2015-10-22 00:05:10 -07:00
Harshavardhana
8978c19db6 Merge pull request #936 from abperiasamy/description
move description above usage
2015-10-21 22:37:40 -07:00
Anand Babu (AB) Periasamy
b077bb9165 move description above usage 2015-10-21 22:27:20 -07:00
Harshavardhana
dbe3d9489e Merge pull request #935 from harshavardhana/fs-bucket
On windows translate Prefix, Marker and Delimiter for paths
2015-10-21 22:08:52 -07:00
Harshavardhana
afa27b9847 On windows translate Prefix, Marker and Delimiter for paths 2015-10-21 22:00:03 -07:00
Harshavardhana
1157898244 Merge pull request #934 from harshavardhana/osx-build
Rename _linux to _nix for OS X build
2015-10-21 20:33:14 -07:00
Harshavardhana
1184cc5e8c Rename _linux to _nix for OS X build 2015-10-21 20:18:53 -07:00
Harshavardhana
aa043ba1db Merge pull request #933 from harshavardhana/enable-different-logger
Enable all config loggers
2015-10-21 17:40:19 -07:00
Harshavardhana
09e51002ed Enable all config loggers 2015-10-21 17:35:07 -07:00
Harshavardhana
bf644aaba9 Merge pull request #932 from harshavardhana/logger-functionality
Add config command
2015-10-21 16:04:24 -07:00
Harshavardhana
2ccadc2b17 Add config command 2015-10-21 15:41:14 -07:00
Harshavardhana
7966ed46c8 Merge pull request #929 from harshavardhana/file-logger
Add logger command
2015-10-21 00:04:59 -07:00
Harshavardhana
56003fded7 Add logger command - also migrate from old config to newer config 2015-10-21 00:02:16 -07:00
Harshavardhana
e8892a9f3c Merge pull request #931 from harshavardhana/add-arm-support
Add arm support for build scripts
2015-10-20 12:26:41 -07:00
Harshavardhana
32898c72fa Add arm support for build scripts 2015-10-20 12:03:03 -07:00
Harshavardhana
ed9ef861e1 Merge pull request #930 from harshavardhana/fix-portability
Fix portability issues for arm on raspberry pi
2015-10-20 11:39:43 -07:00
Harshavardhana
b74852116a Fix portability issues for arm on raspberry pi 2015-10-20 11:22:00 -07:00
Harshavardhana
27612b9aed Merge pull request #928 from harshavardhana/server-main
Add new logging connectors
2015-10-19 23:14:40 -07:00
Harshavardhana
469a3475b6 Add new logging connectors 2015-10-19 23:11:32 -07:00
Harshavardhana
d20842fc52 Merge pull request #927 from harshavardhana/accesslog-handler
Implement accessLog handler
2015-10-19 13:30:20 -07:00
Harshavardhana
b9ea18b8b8 Implement accessLog handler 2015-10-19 13:07:09 -07:00
Harshavardhana
e9d5ec3d64 Merge pull request #926 from harshavardhana/available-disk-space
Add 5% cumulative reduction in total size of the disk
2015-10-19 10:47:51 -07:00
Harshavardhana
dddb1650de Add 5% cumulative reduction in total size of the disk
This is done because the filesystem holds additional metadata and inode space
which is unaccounted for during the min-free-disk calculation.
2015-10-19 10:33:34 -07:00
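A hedged sketch of the calculation; the function name, parameters, and error text are assumptions. 5% of the total size is set aside for filesystem metadata before the min-free-disk threshold is enforced.

```
package main

import (
	"errors"
	"fmt"
)

// checkMinFreeDisk reduces the reported total by 5% to account for
// filesystem metadata and inode space, then checks the free-space threshold.
func checkMinFreeDisk(total, free, minFreePercent int64) error {
	adjustedTotal := total - total*5/100 // assumed 5% reserve
	if adjustedTotal <= 0 {
		return errors.New("invalid disk size")
	}
	availablePercent := free * 100 / adjustedTotal
	if availablePercent < minFreePercent {
		return fmt.Errorf("only %d%% free, %d%% required", availablePercent, minFreePercent)
	}
	return nil
}

func main() {
	fmt.Println(checkMinFreeDisk(1000, 40, 5)) // 40/950 is about 4%, so this errors
}
```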
Harshavardhana
d5e5b8777f Merge pull request #923 from harshavardhana/auto-expiry
Add initial cut of auto expiry of objects
2015-10-19 01:41:30 -07:00
Harshavardhana
179d2d7dac Add initial cut of auto expiry of objects 2015-10-19 01:34:31 -07:00
Harshavardhana
c590acfc38 Merge pull request #925 from harshavardhana/min-free-disk
Implement min-free-disk as a subcommand, deprecate flag
2015-10-19 01:06:26 -07:00
Harshavardhana
c065be656c Implement min-free-disk as a subcommand, deprecate flag 2015-10-19 00:59:20 -07:00
Harshavardhana
bcbd537d63 Merge pull request #922 from harshavardhana/min-free-disk
Implementing min-free-disk
2015-10-18 00:25:54 -07:00
Harshavardhana
5b2fa33bdb Implementing min-free-disk 2015-10-18 00:23:14 -07:00
Harshavardhana
18a6d7ea5d Merge pull request #921 from harshavardhana/disk-fs
Improve disk code to return back disk StatFS{} structure
2015-10-17 20:21:24 -07:00
Harshavardhana
a8a935f5fd Improve disk code to return back disk StatFS{} structure
```
StatFS {
Total int64
Free int64
FSType string
}
```

Provides more information in a cross platform way.
2015-10-17 20:19:26 -07:00
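A Linux-only sketch of how such a structure might be filled via syscall.Statfs; the conversions and the raw filesystem-type formatting are assumptions, not the repository's actual disk package.

```
//go:build linux

package disk

import (
	"fmt"
	"syscall"
)

// StatFS mirrors the structure shown in the commit message.
type StatFS struct {
	Total  int64
	Free   int64
	FSType string
}

// stat fills StatFS from statfs(2); mapping the magic number in s.Type to a
// human-readable filesystem name is left out of this sketch.
func stat(path string) (StatFS, error) {
	var s syscall.Statfs_t
	if err := syscall.Statfs(path, &s); err != nil {
		return StatFS{}, err
	}
	return StatFS{
		Total:  int64(s.Blocks) * int64(s.Bsize),
		Free:   int64(s.Bfree) * int64(s.Bsize),
		FSType: fmt.Sprintf("0x%x", s.Type),
	}, nil
}
```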
Harshavardhana
4b3961e1df Merge pull request #920 from harshavardhana/free-disk-space
Add disk package
2015-10-17 16:52:59 -07:00
Harshavardhana
aee0845b2e Add disk package
Implements

   - Stat returns total and free disk space supported across platforms
   - Type returns type of the filesystem underneath
2015-10-17 16:48:24 -07:00
Harshavardhana
f490af85e8 Merge pull request #919 from harshavardhana/update-command
Implement update command
2015-10-17 15:08:43 -07:00
Harshavardhana
47f1ffa1f3 Implement update command 2015-10-17 15:04:54 -07:00
Harshavardhana
357f9d7a05 Merge pull request #917 from harshavardhana/fs-separator
Add fs separator
2015-10-17 12:12:07 -07:00
Harshavardhana
1256ca86d0 Add fs separator 2015-10-17 12:05:12 -07:00
Harshavardhana
c2859c1f7d Merge pull request #916 from harshavardhana/fs-utils
If directory already removed, return nil and move on
2015-10-17 00:15:47 -07:00
Harshavardhana
2ec679a089 If directory already removed, return nil and move on 2015-10-17 00:13:46 -07:00
Harshavardhana
c24235df8b Merge pull request #915 from harshavardhana/delete-bucket
Implement delete bucket properly with proper error handlings
2015-10-17 00:05:28 -07:00
Harshavardhana
d534fc5a4f Implement delete bucket properly with proper error handlings 2015-10-17 00:01:12 -07:00
Harshavardhana
7a4ca0b9c2 Merge pull request #914 from harshavardhana/enhance-filtering-further
Enhance listing further, this time handle cases related to common pre…
2015-10-16 23:15:51 -07:00
Harshavardhana
c9af01d807 Enhance listing further, this time handle cases related to common prefixes 2015-10-16 23:11:41 -07:00
Harshavardhana
5fb46cf75c Merge pull request #913 from harshavardhana/http-status
Reply back proper statuses for DeleteBucket/DeleteObject
2015-10-16 20:06:38 -07:00
Harshavardhana
704fa420a3 Reply back proper statuses for DeleteBucket/DeleteObject 2015-10-16 20:03:44 -07:00
Harshavardhana
f825a32b53 Merge pull request #908 from harshavardhana/bucket-acl-support
Implement Bucket ACL support
2015-10-16 19:49:37 -07:00
Harshavardhana
0eb7f078f9 Implement Bucket ACL support 2015-10-16 19:47:30 -07:00
Harshavardhana
8fb45e92f9 Merge pull request #907 from harshavardhana/anonymous
If anonymous mode is set avoid verifying signature at lower level
2015-10-16 14:10:26 -07:00
Harshavardhana
f367ac927f Merge branch 'master' into anonymous 2015-10-16 13:55:53 -07:00
Harshavardhana
9a01026a78 If anonymous mode is set avoid verifying signature at lower level 2015-10-16 13:47:44 -07:00
Harshavardhana
76fbb16ea8 Merge pull request #906 from minio/harshavardhana-patch-1
Update README.md
2015-10-16 13:27:10 -07:00
Harshavardhana
97393fd2a2 Update README.md 2015-10-16 11:56:53 -07:00
Harshavardhana
97cc446af8 Merge pull request #905 from harshavardhana/update-description
Update minio micro services description
2015-10-16 11:45:59 -07:00
Harshavardhana
94b0243341 Update minio micro services description 2015-10-16 11:40:47 -07:00
Harshavardhana
762b798767 Migrate this project to minio micro services code 2015-10-16 11:26:08 -07:00
Harshavardhana
8c4119cbeb Merge pull request #904 from harshavardhana/fix-bugs
Fix bugs in post policy and presigned signature handling
2015-10-14 15:47:55 -07:00
Harshavardhana
2d0cc80646 Fix bugs in post policy and presigned signature handling 2015-10-14 15:45:34 -07:00
Harshavardhana
189188fdf3 Merge pull request #903 from technosophos/fix/902-missing-dot-minio
During auth generation, create directory if it does not exist.
2015-10-12 13:53:38 -07:00
Matt Butcher
ac6abd608f During auth generation, create directory if it does not exist.
Addresses issue #902
2015-10-12 14:46:44 -06:00
Harshavardhana
f1c099af5f Merge pull request #901 from harshavardhana/add-windows
Add windows support for minhttp library
2015-10-11 23:20:50 -07:00
Harshavardhana
3318cba132 Add windows support for minhttp library 2015-10-11 01:08:16 -07:00
Harshavardhana
6df28d062b Merge pull request #900 from harshavardhana/migrate-vendor
Migrate to new govendor
2015-10-10 22:31:40 -07:00
Harshavardhana
886d6ac4a7 Migrate to new govendor 2015-10-10 22:28:56 -07:00
Harshavardhana
2f5fa394ce Merge pull request #899 from harshavardhana/fix-signature-v4-bugs
Fix some bugs in controller rpc
2015-10-10 19:07:28 -07:00
Harshavardhana
7cde4577d0 Fix some bugs in controller rpc 2015-10-10 19:03:59 -07:00
Harshavardhana
05de60a598 Merge pull request #898 from harshavardhana/merge-few-files
Restructure top level files a bit, merge code into common file
2015-10-09 10:12:36 -07:00
Harshavardhana
9b2d38d142 Restructure top level files a bit, merge code into common file 2015-10-09 10:05:49 -07:00
Harshavardhana
f48b699a8e Merge pull request #897 from harshavardhana/add-rpc-signature-handler
Add rpc signature handler
2015-10-08 22:31:45 -07:00
Harshavardhana
7fea9cb550 Add rpc signature handler 2015-10-08 22:28:11 -07:00
Harshavardhana
312af12fd5 Merge pull request #896 from harshavardhana/get-bucket-acl
Implement GetBucketACL - fixes #893
2015-10-08 11:22:18 -07:00
Harshavardhana
11048708bb Implement GetBucketACL - fixes #893 2015-10-08 11:12:44 -07:00
Anand Babu (AB) Periasamy
bf901d3b9a Merge pull request #895 from abperiasamy/tasker
new task model minio server
2015-10-08 02:23:29 -07:00
Anand Babu (AB) Periasamy
b52697e6ad new task model minio server 2015-10-08 02:20:24 -07:00
Harshavardhana
8c6204e35e Merge pull request #894 from harshavardhana/putbucket-acl-handler
Add proper router for handling putBucketACLHandler
2015-10-07 20:39:52 -07:00
Harshavardhana
d18ca4b40d Add proper router for handling putBucketACLHandler 2015-10-07 20:36:47 -07:00
Harshavardhana
e719adec8b Merge pull request #892 from harshavardhana/add-quick-version
Add quick.CheckVersion() to verify config version quickly before unma…
2015-10-07 17:49:16 -07:00
Harshavardhana
a060b158c8 Add quick.CheckVersion() to verify config version quickly before unmarshalling the full struct
This is needed during migration where we would need to verify the underlying version number
in a quick way.
2015-10-07 17:44:33 -07:00
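A hedged sketch of the idea (the real quick.CheckVersion signature may differ): unmarshal only the version field so migration code can decide which struct to use before decoding the whole config.

```
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// checkVersion reads just the "version" field of a JSON config file.
func checkVersion(path, want string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	var v struct {
		Version string `json:"version"`
	}
	if err := json.Unmarshal(data, &v); err != nil {
		return false, err
	}
	return v.Version == want, nil
}

func main() {
	ok, err := checkVersion("config.json", "2")
	fmt.Println(ok, err)
}
```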
Harshavardhana
c2fdccade4 Merge pull request #891 from harshavardhana/server-signature-v4
Enforce signature v4 tests all the time, server defaults to only auth…
2015-10-07 11:08:31 -07:00
Harshavardhana
ee377c9bff Enforce signature v4 tests all the time, server defaults to only authenticated requests.
All requests to the minio server must be authenticated from now on, using keys generated at
``${HOME}/.minio/users.json`` by ``minio controller`` during its first run.

Add a new hidden option ``--anonymous`` for running server in unauthenticated mode.
2015-10-07 10:43:27 -07:00
Harshavardhana
00b0f2e0d4 Merge pull request #890 from harshavardhana/donut-erasure
Make erasure Encode and Decode atomic to avoid races
2015-10-06 23:07:56 -07:00
Harshavardhana
ab5ea997ab Make erasure Encode and Decode atomic to avoid races 2015-10-06 23:05:01 -07:00
Harshavardhana
e6d935731a Merge pull request #889 from harshavardhana/canonicalize
Canonicalize all the incoming input values, now PresignedPostPolicy w…
2015-10-06 10:28:34 -07:00
Harshavardhana
1b42398e8b Canonicalize all the incoming input values, now PresignedPostPolicy works with minio-go 2015-10-06 10:21:28 -07:00
Harshavardhana
31007cd0fa Merge pull request #888 from harshavardhana/erasure-simply
Make erasure matrix type not optional, choose automatically
2015-10-05 22:41:50 -07:00
Harshavardhana
d5ce2f6944 Make erasure matrix type not optional, choose automatically
Remove the option of providing a Technique and handling errors based on it;
choose a matrix type automatically based on the number of data blocks.

Intel recommends using cauchy for consistently invertible matrices;
while vandermonde is faster, we should default to cauchy for large
data blocks.
2015-10-05 22:38:02 -07:00
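A toy sketch of the auto-selection; the cut-over point below is an assumption for illustration, not the repository's actual rule.

```
package main

import "fmt"

// pickTechnique chooses an erasure coding matrix type from the data block
// count instead of exposing it as a user option.
func pickTechnique(dataBlocks int) string {
	const cauchyThreshold = 6 // hypothetical cut-over point
	if dataBlocks > cauchyThreshold {
		return "cauchy" // consistently invertible for larger block counts
	}
	return "vandermonde" // faster for small block counts
}

func main() {
	fmt.Println(pickTechnique(4), pickTechnique(10))
}
```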
Harshavardhana
cf0e1a156b Merge pull request #887 from harshavardhana/fix-encoding-bug-donut
Fix encoding bug in donut during encoding phase
2015-10-05 22:15:22 -07:00
Harshavardhana
4ed50a8004 Fix encoding bug in donut during encoding phase
Stream reading needs to check that the length parameter is non-zero:
after Read()ing into a buffer of predefined length, an EOF might be returned
with length == 0.

Erasure taking this zeroed data in might wrongly encode it as part of the existing
data blocks, which leads to errors while decoding even when the other contents
are intact.
2015-10-05 22:12:53 -07:00
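The standard Go read-loop pattern that avoids the bug described above: process whatever was read before inspecting the error, and never hand a zero-length chunk to the encoder. Here encode is a stand-in for the erasure encoding step.

```
package main

import (
	"bytes"
	"fmt"
	"io"
)

func encode(chunk []byte) { fmt.Printf("encoding %d bytes\n", len(chunk)) }

// encodeStream reads fixed-size chunks; a final Read may return data and
// io.EOF together, and an EOF with n == 0 must not be encoded at all.
func encodeStream(r io.Reader, chunkSize int) error {
	buf := make([]byte, chunkSize)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			encode(buf[:n])
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	_ = encodeStream(bytes.NewReader([]byte("hello world")), 4)
}
```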
Harshavardhana
f9174632bb Merge pull request #885 from harshavardhana/json-output
Add --json output formatter for server
2015-10-05 00:23:19 -07:00
Harshavardhana
f0a8dbecae Add --json output formatter for server 2015-10-05 00:20:49 -07:00
Harshavardhana
cd489b71e2 Merge pull request #884 from harshavardhana/add-controller-command-line
First time mode for controller
2015-10-04 16:44:41 -07:00
Harshavardhana
c4faf47e64 First time mode for controller
- Upon first time invocation ``minio controller`` would create access keys and secret id
- Upon request passing 'keys' arg ``minio controller`` would provide the keys
- Add colorized notification
2015-10-04 16:42:16 -07:00
Harshavardhana
bdd8e5873a Merge pull request #883 from harshavardhana/simplify-signature
Simplify signature handling
2015-10-04 13:19:19 -07:00
Harshavardhana
cfdb29cac0 Simplify signature handling
This change brings a new SignatureHandler where presigned
requests without a payload are handled very early, before even
going through the call.

This change simplifies the Donut codebase so it does not carry
signature-related logic for all APIs.

Simplification is still needed for payload-based signatures, e.g. PUT/POST calls,
which are still part of the donut codebase; that will be done subsequently
after the donut re-write.
2015-10-04 13:15:33 -07:00
Harshavardhana
3de10f9472 Merge pull request #882 from harshavardhana/remove-http-responses-injson
Remove using HTTP responses in json reply always in application/xml
2015-10-04 01:25:49 -07:00
Harshavardhana
2a9c37ba26 Remove using HTTP responses in json reply always in application/xml 2015-10-04 01:22:50 -07:00
Harshavardhana
6c7543d49b Merge pull request #880 from harshavardhana/implement-presigned-policy
Implement presigned policy
2015-10-04 00:04:28 -07:00
Harshavardhana
c8de5bad2f Implement presigned policy 2015-10-04 00:01:34 -07:00
Harshavardhana
09dc360e06 Merge branch 'vadmeste-print_json_syntax_error_line_number' 2015-10-03 12:26:04 -07:00
Anis ELLEUCH
b5ea05d839 A better way to print prettified json syntax error msg 2015-10-03 12:25:44 -07:00
Harshavardhana
db293aedb7 Merge pull request #881 from technosophos/feature/docker-go151
Change Dockerfile to use smaller distro, non-root user, and Go 1.5.1
2015-10-02 14:43:30 -07:00
Matt Butcher
c486dfbb7b Add non-root minio user.
This adds a minio user and runs minio as that user instead of
as root.
2015-10-02 15:22:23 -06:00
Matt Butcher
37a02670f5 Use ubuntu-debootstrap and Go 1.5.1.
Currently, the Dockerfile is broken because it installs Go 1.5
while the minimum required version is 1.5.1.

Also, switch to a minimal version of Ubuntu (ubuntu-debootstrap)
and reduce the image size by 70M of unneeded dependencies.
2015-10-02 13:38:51 -06:00
Harshavardhana
62e31e7eb0 Merge pull request #879 from harshavardhana/presigned-signature-v4
Implement presigned signature v4 support
2015-10-01 10:21:05 -07:00
Harshavardhana
3b070dee16 Fix an important metadata getObject bug in donut 2015-10-01 10:18:03 -07:00
Harshavardhana
81cc017f91 Implement presigned signature v4 support 2015-10-01 10:17:47 -07:00
Harshavardhana
6012e18123 Merge pull request #878 from harshavardhana/multipart-donut
Reduce memory usage for memory multipart write by doing io.Pipe() str…
2015-09-30 20:56:20 -07:00
Harshavardhana
50750efb52 Reduce memory usage for memory multipart write by doing io.Pipe() streaming copy 2015-09-30 20:53:30 -07:00
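A minimal sketch of the streaming copy (the function shape is an assumption): io.Pipe connects the incoming part to the backend reader so the whole part never needs to be buffered in memory.

```
package main

import (
	"fmt"
	"io"
	"strings"
)

// streamPart copies a multipart part to the backend through a pipe;
// store stands in for whatever consumes the part on the other side.
func streamPart(part io.Reader, store func(io.Reader) error) error {
	pr, pw := io.Pipe()
	go func() {
		_, err := io.Copy(pw, part)
		pw.CloseWithError(err) // a nil err closes the pipe with io.EOF
	}()
	return store(pr)
}

func main() {
	err := streamPart(strings.NewReader("part data"), func(r io.Reader) error {
		n, err := io.Copy(io.Discard, r)
		fmt.Println("stored", n, "bytes")
		return err
	})
	fmt.Println(err)
}
```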
Harshavardhana
daa089fb06 Merge pull request #876 from harshavardhana/probe
Probe stringer should avoid frivolous newlines
2015-09-29 10:17:19 -07:00
Harshavardhana
8c7c5df770 Probe stringer should avoid frivolous newlines 2015-09-29 10:13:11 -07:00
Anand Babu (AB) Periasamy
77cca1e648 Update README.md 2015-09-26 15:34:28 -07:00
Anand Babu (AB) Periasamy
a083733ce2 updated jobs section 2015-09-26 15:29:05 -07:00
Harshavardhana
da1293240c Merge pull request #875 from harshavardhana/fetch-donut
Fetch donut stats properly, update assetfs with new changes
2015-09-26 00:44:14 -07:00
Harshavardhana
301ffe60a2 Fetch donut stats properly, update assetfs with new changes 2015-09-26 00:38:25 -07:00
Harshavardhana
83d8de05ce Merge pull request #874 from harshavardhana/update-assetfs
Add new changes to WebUI with implementation of GenerateAuth(), AddServer()
2015-09-25 19:09:24 -07:00
Harshavardhana
5c7c1ade3f Add new changes to WebUI with implementation of GenerateAuth(), AddServer() 2015-09-25 19:06:31 -07:00
Harshavardhana
7ac45ecddc Merge pull request #873 from harshavardhana/controller-assetfs
Add WebUI assetfs initial version
2015-09-25 00:48:06 -07:00
Harshavardhana
3b278b7f67 Add WebUI assetfs initial version 2015-09-25 00:45:23 -07:00
Harshavardhana
43eac57c7e Merge pull request #872 from harshavardhana/fix-decoding-failure
Fix Linux/Mac OS X erasure decoding failure with new Golang version 1…
2015-09-24 21:45:08 -07:00
Harshavardhana
cd52d7a11c Fix Linux/Mac OS X erasure decoding failure with new Golang version 1.5.1
Fixes #871
2015-09-24 21:42:59 -07:00
Harshavardhana
e793431852 Merge pull request #870 from harshavardhana/disable-multipart
Disable multipart for donut backend from being used
2015-09-24 18:56:53 -07:00
Harshavardhana
3785489153 Disable multipart for donut backend from being used
Will enable it later after cleanup
2015-09-24 18:54:30 -07:00
Harshavardhana
18dd7fc346 Merge pull request #867 from krishnasrinivas/discoverservers
Add DiscoverServers and GetControllerNetInfo controller APIs
2015-09-24 10:41:30 -07:00
Krishna Srinivas
5ebbc6bb0e Add DiscoverServers and GetControllerNetInfo controller APIs 2015-09-23 16:06:37 -07:00
Anand Babu (AB) Periasamy
7b934a7c6c Merge pull request #869 from abperiasamy/donut-check
remove mount-point requirement
2015-09-23 12:33:57 -07:00
Anand Babu (AB) Periasamy
8c356d4f5a remove mount-point requirement 2015-09-23 12:16:18 -07:00
Harshavardhana
3a52d9b207 Merge pull request #868 from harshavardhana/fmt
Do not use fmt.Println with formatting strings
2015-09-23 09:31:46 -07:00
Harshavardhana
39c8991e5f Do not use fmt.Println with formatting strings 2015-09-23 09:21:11 -07:00
Harshavardhana
9d61b2d6db Merge pull request #866 from harshavardhana/proxy-request-rebalance-storagestats
Implement a new Donut service on server side
2015-09-22 19:18:26 -07:00
Harshavardhana
cc223b5278 Implement a new Donut service on server side 2015-09-22 19:08:02 -07:00
Harshavardhana
c92145c3df Merge pull request #865 from krishnasrinivas/dummydonutstats
Donut dummy services - StorageStats, RebalaceStats
2015-09-22 16:03:44 -07:00
Krishna Srinivas
2607ab559a Donut dummy services - StorageStats, RebalaceStats 2015-09-22 15:57:22 -07:00
Harshavardhana
099b6fcf92 Merge pull request #864 from krishnasrinivas/serverrep-fix
Json tags to some structs were missing. Fix ServerRep reply on the se…
2015-09-22 01:36:04 -07:00
Krishna Srinivas
ca17408be0 Json tags to some structs were missing. Fix ServerRep reply on the server. 2015-09-22 01:31:20 -07:00
Harshavardhana
3b050a4299 Merge pull request #863 from krishnasrinivas/hosts-string
Change ControllerArgs Hosts array to Host string
2015-09-22 01:19:29 -07:00
Krishna Srinivas
53e87a0790 Change ControllerArgs Hosts array to Host string 2015-09-22 01:15:27 -07:00
Harshavardhana
b45c6020e4 Merge pull request #862 from krishnasrinivas/probe-fix
Fix logrus error message logging
2015-09-21 16:38:25 -07:00
Krishna Srinivas
bf8b8981bf Fix logrus error message logging 2015-09-21 11:48:49 -07:00
Harshavardhana
c3aa35424e Merge pull request #861 from harshavardhana/controller
Add list of servers, for controller args.
2015-09-20 17:47:49 -07:00
Harshavardhana
1f364483e3 Add list of servers, for controller args. 2015-09-20 17:29:25 -07:00
Anand Babu (AB) Periasamy
345521f34d Merge pull request #860 from abperiasamy/rm-gomaxprocs
setting GOMAXPROCS is no longer needed
2015-09-20 16:08:33 -07:00
Anand Babu (AB) Periasamy
45146cc138 setting GOMAXPROCS is no longer needed 2015-09-20 16:06:16 -07:00
Harshavardhana
39e2209755 Merge pull request #859 from harshavardhana/atomic-test
Move atomic package to the top and simplify its tests
2015-09-20 14:50:01 -07:00
Harshavardhana
b938e40fb5 Move atomic package to the top and simplify its tests 2015-09-20 13:51:38 -07:00
Harshavardhana
fb84335010 Merge pull request #858 from harshavardhana/rename
Rename files accordingly - consolidating further
2015-09-20 12:51:18 -07:00
Harshavardhana
6df97ef00a Rename files accordingly 2015-09-20 12:44:50 -07:00
Harshavardhana
8d204b38eb Merge pull request #856 from harshavardhana/rpc-controller
Consolidate controller rpc into one single service
2015-09-20 12:32:08 -07:00
Harshavardhana
d1621691b7 Consolidate controller rpc into one single service 2015-09-20 12:27:42 -07:00
Harshavardhana
29673fed76 Merge pull request #855 from harshavardhana/simplification-toplevel-names
Improve code further - this time further simplification of names
2015-09-19 21:24:03 -07:00
Harshavardhana
674631f9d8 Improve code further - this time further simplification of names 2015-09-19 21:21:39 -07:00
Harshavardhana
2721bef8da Add errorIf for all API handlers to print call trace upon errors 2015-09-19 19:55:12 -07:00
Harshavardhana
e510e97f28 Consolidating more codebase and cleanup in server / controller 2015-09-19 19:55:12 -07:00
Harshavardhana
d9328d25e9 Merge pull request #854 from krishnasrinivas/servicehandlers2
Controller Service proxies rpc calls to the corresponding servers
2015-09-19 19:40:20 -07:00
Krishna Srinivas
e600bd6b4f Controller Service proxies rpc calls to the corresponding servers 2015-09-19 19:37:20 -07:00
Harshavardhana
c953b0dab3 Merge pull request #851 from harshavardhana/cleanup
Move all server and controller packages into top-level
2015-09-19 01:09:42 -07:00
Harshavardhana
d54488f144 Move all server and controller packages into top-level 2015-09-19 01:07:42 -07:00
Harshavardhana
d808c3685d Merge pull request #850 from harshavardhana/logger-tests
Add tests for minio top level logger
2015-09-18 23:44:52 -07:00
Harshavardhana
7d8cfa0a77 Add tests for minio top level logger 2015-09-18 23:41:36 -07:00
Anand Babu (AB) Periasamy
ec0fdf95e5 Merge pull request #849 from abperiasamy/version-format
new version format and some cleanup
2015-09-18 23:35:26 -07:00
Anand Babu (AB) Periasamy
89a86948b5 new version format and some cleanup 2015-09-18 23:33:28 -07:00
Anand Babu (AB) Periasamy
d1f1b7ac31 new version format and some cleanup 2015-09-18 23:27:04 -07:00
Harshavardhana
2141c9f70f Merge pull request #848 from harshavardhana/rpc-tests
Add new rpc tests for Server.Add and Server.List, improve Version.Get…
2015-09-18 17:47:15 -07:00
Harshavardhana
778f8cd222 Add new rpc tests for Server.Add and Server.List, improve Version.Get RPC to provide more details 2015-09-18 17:44:46 -07:00
Harshavardhana
b59d7882ef Merge pull request #847 from harshavardhana/enhance-signature-handler
Enhance signature handler - throw back valid error messages
2015-09-18 15:17:08 -07:00
Harshavardhana
2a15dd5eab Enhance signature handler - throw back valid error messages 2015-09-18 15:14:55 -07:00
Harshavardhana
ac93bbb41d Merge pull request #846 from harshavardhana/new-changes
With new auth config changes, restructure the API code to use the new style
2015-09-18 03:45:03 -07:00
Harshavardhana
6a5e5c1826 With new auth config changes, restructure the API code to use the new style 2015-09-18 03:41:05 -07:00
Harshavardhana
7a61447ce5 Merge pull request #845 from harshavardhana/add-missing
Add missing reply.Name and add possible failure tests
2015-09-18 03:20:54 -07:00
Harshavardhana
b4ce1e8c1d Add missing reply.Name and add possible failure tests 2015-09-18 03:15:19 -07:00
Harshavardhana
6803b64768 Merge pull request #844 from harshavardhana/enhance-auth
Enhance auth JSONRPC, now provides persistent output
2015-09-18 03:05:37 -07:00
Harshavardhana
f8bb85aeb7 Enhance auth JSONRPC, now provides persistent output
Implements

   - Auth.Generate("user")
   - Auth.Fetch("user")
   - Auth.Reset("user")

This patch also adds testing for each of these cases
2015-09-18 03:02:39 -07:00
Anand Babu (AB) Periasamy
8be09ce1cf Merge pull request #843 from abperiasamy/logger
logrus based logger
2015-09-17 23:59:06 -07:00
Anand Babu (AB) Periasamy
1394d8a7a8 logrus based logger 2015-09-17 23:49:30 -07:00
Harshavardhana
51652c38cb Merge pull request #842 from harshavardhana/fix-minio
Fix minio header in accordance with rfc2616.txt
2015-09-17 23:48:08 -07:00
Harshavardhana
4bcd86408b Fix minio header in accordance with rfc2616.txt 2015-09-17 23:46:10 -07:00
Harshavardhana
0079849938 Merge pull request #841 from harshavardhana/tests-travis
Run tests only on travis, local builds just do govet, golint and gofmt
2015-09-17 22:40:12 -07:00
Harshavardhana
bd33ccc3a2 Run tests only on travis, local builds just do govet, golint and gofmt 2015-09-17 22:31:11 -07:00
Harshavardhana
c223427cbf Merge pull request #840 from harshavardhana/version
Version is a package now, will be re-used across codebase.
2015-09-17 20:20:02 -07:00
Harshavardhana
7093a05ab1 Version is a package now, will be re-used across codebase. 2015-09-17 20:17:33 -07:00
Harshavardhana
1c5454e007 Merge pull request #839 from harshavardhana/add-missing-golint
Fix all the golint complaints about newly added changes
2015-09-17 18:58:36 -07:00
Harshavardhana
1887114444 Fix all the golint complaints about newly added changes
Do not use func(this *server); such generic names should not be used
for writing struct methods.
2015-09-17 18:53:42 -07:00
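For illustration, the receiver-naming convention golint enforces (the type and method below are made up):

```
package main

import "fmt"

type server struct {
	addr string
}

// Prefer a short, type-derived receiver name such as 's'.
// golint flags generic receivers like:  func (this *server) Start() error
func (s *server) Start() error {
	fmt.Println("listening on", s.addr)
	return nil
}

func main() {
	_ = (&server{addr: ":9000"}).Start()
}
```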
Harshavardhana
9e7e00e270 Merge pull request #838 from harshavardhana/probe-error
Add more documentation for probe
2015-09-17 18:15:34 -07:00
Harshavardhana
03ef6533c8 Add more documentation for probe 2015-09-17 18:10:42 -07:00
Harshavardhana
f4bd7b151c Merge pull request #835 from harshavardhana/commands-fatal
Add trie to verify wrong inputs, and provide meaningful messages
2015-09-17 16:53:22 -07:00
Harshavardhana
77c71bd596 Add trie to verify wrong inputs, and provide meaningful messages 2015-09-17 16:49:08 -07:00
Harshavardhana
d8656c80bc Merge pull request #834 from harshavardhana/version-checks
Add version checks for golang 1.5.1
2015-09-17 16:36:56 -07:00
Harshavardhana
b5246dbd7d Add version checks for golang 1.5.1 2015-09-17 16:34:06 -07:00
Harshavardhana
e821d4d0cc Merge pull request #833 from krishnasrinivas/server-service-2
Add json rpc ServerService - provides only dummy data right now
2015-09-17 15:51:46 -07:00
Harshavardhana
156ebf6a60 Update README.md 2015-09-17 15:50:30 -07:00
Krishna Srinivas
c49407ced4 rename rpc/server.go -> rpc/rpc.go. rpc/server.go will accommodate ServerService 2015-09-17 15:46:51 -07:00
Harshavardhana
f6e3ad889d Merge pull request #832 from minio/harshavardhana-patch-1
Update README.md
2015-09-17 15:18:39 -07:00
Harshavardhana
99ec05be3e Update README.md 2015-09-17 15:16:28 -07:00
Harshavardhana
c7fe91de49 Merge pull request #828 from harshavardhana/controller-consolidation
Consolidate controller, move rpc package into controller - remove dan…
2015-09-15 19:41:21 -07:00
Harshavardhana
3f4b98ca4c Consolidate controller, move rpc package into controller - remove dangling code in pkg/server 2015-09-15 19:38:36 -07:00
Harshavardhana
8d5f6e0b96 Merge pull request #826 from krishnasrinivas/middleware-cleanup
Remove unneeded functions in middleware init
2015-09-15 18:11:45 -07:00
Krishna Srinivas
b1b387b157 Remove unneeded functions in middleware init 2015-09-15 18:09:09 -07:00
Harshavardhana
87a20e3ba3 Merge pull request #827 from harshavardhana/krishnasrinivas-parallel-read2
Read from the disks in parallel during object read
2015-09-15 17:38:13 -07:00
Harshavardhana
45d8898019 Merge branch 'parallel-read2' of https://github.com/krishnasrinivas/minio into krishnasrinivas-parallel-read2
Make few more changes and rebased with current master
2015-09-15 17:33:33 -07:00
Harshavardhana
ba6ddd5924 Merge pull request #825 from harshavardhana/simplify
Simplify version parsing with newVersion() make it more robust
2015-09-10 00:52:31 -07:00
Harshavardhana
83c1c4982a Simplify version parsing with newVersion() make it more robust 2015-09-10 00:48:46 -07:00
Anand Babu (AB) Periasamy
de633b0ff8 Merge pull request #824 from abperiasamy/logrus
vendor package logrus
2015-09-09 21:57:02 -07:00
Anand Babu (AB) Periasamy
93b406c986 vendor package logrus 2015-09-09 21:51:45 -07:00
Anand Babu (AB) Periasamy
28641dc525 Merge pull request #823 from abperiasamy/logrus
logrus logger
2015-09-09 20:35:55 -07:00
Anand Babu (AB) Periasamy
6930e4d668 logrus logger 2015-09-09 20:28:09 -07:00
Harshavardhana
4495c2d8d8 Update INSTALLGO.md 2015-09-09 16:02:13 -07:00
Harshavardhana
83bac2b08f Merge pull request #822 from harshavardhana/verify-runtime
Verify golang runtime for 1.5.1 and above, also verify if runner is a root - disallow it
2015-09-09 15:39:33 -07:00
Harshavardhana
d3f9a9da0d Verify golang runtime for 1.5.1 and above, also verify if runner is a root 2015-09-09 15:37:06 -07:00
Harshavardhana
17f5df689e Merge pull request #821 from harshavardhana/shadow
Avoid shadowing variables and enable checks to avoid them during build
2015-09-09 15:18:21 -07:00
Harshavardhana
1e2c010174 Avoid shadowing variables and enable checks to avoid them during build 2015-09-09 15:14:55 -07:00
Anand Babu (AB) Periasamy
2c877acd48 Merge pull request #820 from abperiasamy/remove-globals
remove debug option
2015-09-09 12:53:52 -07:00
Anand Babu (AB) Periasamy
e387e5578d remove debug option 2015-09-09 12:46:20 -07:00
Harshavardhana
0d4da7767b Update donut-metadata.md 2015-09-08 22:27:20 -07:00
Harshavardhana
96bc95c952 Update donut-metadata.md formatting changes 2015-09-08 22:15:51 -07:00
Harshavardhana
bae43ccbc7 Merge pull request #819 from harshavardhana/donut-metadata
Add donut metadata configs
2015-09-08 21:35:34 -07:00
Harshavardhana
fcd290193d Add donut metadata configs 2015-09-08 21:32:55 -07:00
Harshavardhana
2eb17eab89 Merge pull request #818 from harshavardhana/logging
Disable logging for now
2015-09-05 21:30:51 -07:00
Harshavardhana
b649eff3fb Disable logging for now 2015-09-05 21:28:35 -07:00
Harshavardhana
95cb056fd5 Merge pull request #817 from harshavardhana/probe
Add probe to main
2015-09-05 21:23:43 -07:00
Harshavardhana
593d90c7de Add probe to main 2015-09-05 21:20:54 -07:00
Harshavardhana
fa11112226 Merge pull request #816 from harshavardhana/erasure
Simplify erasure package for OSX
2015-09-05 20:58:03 -07:00
Harshavardhana
d0f945f8e7 Simplify erasure package for OSX 2015-09-05 20:19:43 -07:00
Harshavardhana
90a247b336 Merge pull request #815 from harshavardhana/remove-cors-controller
Remove cors controller
2015-08-31 17:20:37 -07:00
Harshavardhana
d999ce282a Add cors deps 2015-08-31 17:16:27 -07:00
Harshavardhana
afff3f8885 Revert "Enable controller to have CORS"
This reverts commit f39ac24e99.
2015-08-31 17:15:49 -07:00
Harshavardhana
b5907ec806 Merge pull request #813 from krishnasrinivas/server-cors-add
Add CORS support to minio s3 server
2015-08-31 17:06:26 -07:00
Krishna Srinivas
1e82ee1192 Add CORS support to minio s3 server 2015-08-31 16:59:52 -07:00
Harshavardhana
d5d4c7046f Merge pull request #812 from harshavardhana/enable-cors
Enable controller to have CORS
2015-08-31 01:54:53 -07:00
Harshavardhana
f39ac24e99 Enable controller to have CORS 2015-08-31 01:47:05 -07:00
Harshavardhana
dcf0c71ca3 Merge pull request #810 from harshavardhana/restructure
Restructure server code, controller now runs in silo
2015-08-27 17:10:54 -07:00
Harshavardhana
025f95b1d6 Restructure server code, controller now runs in silo 2015-08-27 17:07:32 -07:00
Anand Babu (AB) Periasamy
2f260adc69 Merge pull request #809 from abperiasamy/ret-untrace
return *probe.Error for Untrace() as well.
2015-08-24 03:43:13 -07:00
Anand Babu (AB) Periasamy
c11aa1c892 return *probe.Error for Untrace() as well. 2015-08-24 03:35:24 -07:00
Harshavardhana
526c8b4d76 Merge pull request #808 from harshavardhana/pkg
Add package add and remove commands to Makefile
2015-08-22 22:26:02 -07:00
Harshavardhana
e6a072e0ad Add package add and remove commands to Makefile 2015-08-22 22:23:36 -07:00
Harshavardhana
4d1f38d28c Merge pull request #807 from harshavardhana/migrate
Migrate to golang1.5 release with GO15VENDOREXPERIMENT=1 enabled
2015-08-22 18:37:56 -07:00
Harshavardhana
988d39a5b6 Migrate to golang1.5 release with GO15VENDOREXPERIMENT=1 enabled 2015-08-22 18:35:37 -07:00
Harshavardhana
5721d85c9a Merge pull request #806 from harshavardhana/tests
Tests were running 4 times due to multiple times the TestingT{} was b…
2015-08-20 22:35:37 -07:00
Harshavardhana
0e416ea699 Tests were running 4 times because TestingT{} was being called multiple times
Calling TestingT{} multiple times hooks up the runner for the Suites that many times,
which leads to the tests being run repeatedly.

Fix it by initializing it only once for all the Suites
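A sketch of the fix with gopkg.in/check.v1 (package and suite names are made up): hook the runner into `go test` from exactly one place so every registered Suite runs once.

```
package mypkg

import (
	"testing"

	. "gopkg.in/check.v1"
)

// Hook gocheck into "go test" exactly once per package; a second TestingT
// call would run every registered Suite again.
func Test(t *testing.T) { TestingT(t) }

type mySuite struct{}

var _ = Suite(&mySuite{})

func (s *mySuite) TestAddition(c *C) {
	c.Assert(1+1, Equals, 2)
}
```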
Harshavardhana
555c946670 Merge pull request #805 from harshavardhana/quick
Add a new quick.Load() function to load directly any config file prov…
2015-08-20 20:35:45 -07:00
Harshavardhana
068d1d1ba9 Add a new quick.Load() function to load directly any config file provided a quick compatible struct{} is also provided 2015-08-20 20:33:49 -07:00
Harshavardhana
2768c9752b Merge pull request #804 from harshavardhana/update-contributors
Update AB's duplicate email address
2015-08-20 16:14:23 -07:00
Harshavardhana
d87d90a6ef Update AB's duplicate email address 2015-08-20 16:10:01 -07:00
Harshavardhana
67f9df0926 Merge pull request #803 from harshavardhana/minio-server-changes
Refactoring minio server command and flags
2015-08-20 13:09:31 -07:00
Harshavardhana
74587886d2 Refactoring minio server command and flags 2015-08-20 13:07:33 -07:00
Anand Babu (AB) Periasamy
8463040cd1 Merge pull request #802 from abperiasamy/probe-reverse
return call stack in reverse
2015-08-19 22:44:29 -07:00
Anand Babu (AB) Periasamy
b49b8cdbe8 return call stack in reverse 2015-08-19 22:40:27 -07:00
Anand Babu (AB) Periasamy
f223c8b36d Merge pull request #801 from abperiasamy/omitjson
skip Env in json printing if empty
2015-08-19 01:23:59 -07:00
Anand Babu (AB) Periasamy
826202716f skip Env in json printing if empty 2015-08-19 01:21:28 -07:00
Anand Babu (AB) Periasamy
56c548c133 Merge pull request #800 from abperiasamy/tracepoint-public
make tracePoint public as well
2015-08-19 01:13:37 -07:00
Anand Babu (AB) Periasamy
76c40e075a make tracePoint public as well 2015-08-19 01:02:39 -07:00
Harshavardhana
8789c5a49d Merge pull request #799 from harshavardhana/pr_out_fix_typo_in_probe
Fix typo in probe
2015-08-18 23:52:04 -07:00
Harshavardhana
ac928b5092 Fix typo in probe 2015-08-18 23:42:41 -07:00
Anand Babu (AB) Periasamy
076354056b Merge pull request #798 from abperiasamy/probe-simplification
simplify probe APIs
2015-08-18 19:34:04 -07:00
Anand Babu (AB) Periasamy
cdf93e534c simplify probe APIs 2015-08-18 19:30:17 -07:00
Harshavardhana
76d11ed1e4 Merge pull request #797 from harshavardhana/migrate_quick
Migrate pkg/quick from mc
2015-08-13 16:35:03 -07:00
Harshavardhana
e9c5a51bc6 Migrate pkg/quick from mc 2015-08-13 16:29:55 -07:00
Harshavardhana
e4dc8ee7b4 Merge pull request #796 from harshavardhana/migrate-to-govendor
Migrate to govendor to avoid limitations of godep
2015-08-12 19:40:34 -07:00
Harshavardhana
61175ef091 Migrate to govendor to avoid limitations of godep
- over the course of a project's history every maintainer needs to update
  its dependency packages; the essential problem with godep is manipulating
  GOPATH - this manipulation leads to static objects created at different locations
  which end up conflicting with the overall functionality of golang.

  This also leads to broken builds. There is no easier way out of this other than
  asking developers to do 'godep restore' all the time, which as a practice
  doesn't sound like a clean solution. On the other hand 'godep restore' has its own
  set of problems.

- govendor is the right tool, though a stop-gap one, until golang's official
  1.5 version fixes this vendoring issue once and for all.

- govendor provides consistency in terms of how import paths should be handled, unlike
  manipulating GOPATH.

  This has advantages
    - no more compiled objects being referenced in GOPATH and build-time GOPATH
      mangling which leads to conflicts.
    - proper import paths referencing the exact package a project is dependent on.

 govendor is simple and provides the minimal necessary tooling to achieve this.

 For now this is the right solution.
2015-08-12 19:24:57 -07:00
Harshavardhana
b4c8b4877e Merge pull request #795 from harshavardhana/pr_out_probe_revamped_to_provide_for_a_new_wrappederror_struct_to_wrap_probes_as_error_interface
Probe revamped to provide for a new WrappedError struct to wrap probes as error interface
2015-08-08 00:23:36 -07:00
Harshavardhana
45b59b8456 Probe revamped to provide for a new WrappedError struct to wrap probes as error interface
This convenience was necessary for golang library functions like io.Copy and io.Pipe,
where we shouldn't be writing proxies and alternatives returning *probe.Error.

This change also brings more changes across the codebase for a clear separation between where an error
interface encapsulating *probe.Error should be passed and where it should be used as is.
2015-08-08 00:16:38 -07:00
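A self-contained sketch of the pattern; probeError stands in for *probe.Error and its fields are assumptions. The wrapper satisfies the standard error interface so the richer value can flow through APIs like io.Copy that only know about `error`.

```
package main

import (
	"errors"
	"fmt"
)

// probeError stands in for *probe.Error: an error value that also carries a
// call trace.
type probeError struct {
	cause error
	trace []string
}

// wrappedError adapts probeError to the plain error interface.
type wrappedError struct {
	p *probeError
}

func (w wrappedError) Error() string { return w.p.cause.Error() }

func main() {
	pe := &probeError{cause: errors.New("disk full"), trace: []string{"PutObject"}}
	var err error = wrappedError{p: pe} // usable wherever a plain error is expected
	fmt.Println(err)
}
```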
Harshavardhana
28d9565400 Merge pull request #793 from harshavardhana/pr_out_use_command_not_found_helper
use command not found helper
2015-08-03 18:25:57 -07:00
Harshavardhana
f8141493bd use command not found helper 2015-08-03 18:06:19 -07:00
Harshavardhana
b8461980bb Merge pull request #792 from harshavardhana/pr_out_migrate_from_iodine_to_probe
Migrate from iodine to probe
2015-08-03 16:36:04 -07:00
Harshavardhana
d09fd8b0a1 Migrate from iodine to probe 2015-08-03 16:33:44 -07:00
Harshavardhana
7f13095260 Merge pull request #791 from harshavardhana/pr_out_more_changes_to_probe_to_avoid_nil_dereferences
More changes to probe to avoid nil dereferences
2015-08-03 01:49:53 -07:00
Harshavardhana
884e9771b2 More changes to probe to avoid nil dereferences 2015-08-03 01:47:37 -07:00
Harshavardhana
a096913dde Merge pull request #790 from harshavardhana/pr_out_minor_changes_to_probe
Minor changes to probe
2015-08-02 20:35:10 -07:00
Harshavardhana
65e4aede82 Minor changes to probe 2015-08-02 20:33:49 -07:00
Anand Babu (AB) Periasamy
574f2aaafa Merge pull request #789 from abperiasamy/trace-on-new
trace on New and add read locks
2015-08-02 12:03:44 -07:00
Anand Babu (AB) Periasamy
697009c0a1 trace on New and add read locks 2015-08-02 11:58:28 -07:00
Harshavardhana
d9493909d8 Merge pull request #788 from krishnasrinivas/open-and-openfile
rename disk.OpenFile to Open which will do os.Open (which will be rea…
2015-08-02 11:05:04 -07:00
Krishna Srinivas
ee4432bc40 rename disk.OpenFile to Open which will do os.Open (which will be read-only). disk.OpenFile will do os.OpenFile (which can be rw, append) 2015-08-02 17:34:29 +05:30
Anand Babu (AB) Periasamy
aa8663f7cc Merge pull request #787 from abperiasamy/probe
probe package to trace & return errors
2015-08-02 02:41:10 -07:00
Anand Babu (AB) Periasamy
a728ddc027 probe package to trace & return errors 2015-08-02 02:38:08 -07:00
Harshavardhana
0a1ba049ba Merge pull request #786 from harshavardhana/pr_out_fix_command_template_typo_and_fix_others
fix command template typo and fix others.
2015-08-01 10:54:06 -07:00
Harshavardhana
81a7772fcd fix command template typo and fix others. 2015-08-01 10:51:02 -07:00
Harshavardhana
ae685f0548 Merge pull request #785 from harshavardhana/pr_out_deprecate_make_go_go_back_to_makefile_make_go_is_not_genversion_go 2015-07-31 17:18:35 -07:00
Harshavardhana
5d3379ed7e deprecate 'make.go', go back to Makefile - make.go is not genversion.go 2015-07-31 17:16:54 -07:00
Harshavardhana
1a27e5eb28 Merge pull request #784 from harshavardhana/pr_out_merge_cmd_donut_into_minio_cmd_deprecate_controller_rpc_request
Merge cmd/donut into minio cmd, deprecate controller RPC request
2015-07-31 13:06:53 -07:00
Harshavardhana
aabfd541e1 Merge cmd/donut into minio cmd, deprecate controller RPC request 2015-07-31 12:57:15 -07:00
Harshavardhana
6be5a2fb7e Merge pull request #783 from harshavardhana/pr_out_crypto_cleanup_remove_unused_functions
crypto/cleanup: remove unused functions
2015-07-29 13:12:04 -07:00
Harshavardhana
2671b2dbf4 crypto/cleanup: remove unused functions 2015-07-29 13:09:55 -07:00
Harshavardhana
179c5441c3 Merge pull request #782 from krishnasrinivas/no-signature-no-sha256
when signature is not available there is no need to compute sha256
2015-07-29 11:21:40 -07:00
Krishna Srinivas
fdd2c22fa5 Do go fmt on bucket.go 2015-07-29 23:49:25 +05:30
Krishna Srinivas
ae8089c9b6 when signature is not available there is no need to compute sha256 2015-07-29 21:09:02 +05:30
Krishna Srinivas
bdc00624fd get errors from buffered channel. Return error during object read only if we have readers < k 2015-07-29 19:36:41 +05:30
Harshavardhana
de74ae9d59 Merge pull request #781 from harshavardhana/pr_out_minor_add_commands_into_donut_template
minor: Add commands into donut template
2015-07-28 19:42:19 -07:00
Harshavardhana
f15375426a minor: Add commands into donut template 2015-07-28 19:40:02 -07:00
Harshavardhana
d70695fc6f Merge pull request #780 from harshavardhana/pr_out_collapse_getpartialobject_into_getobject_
Collapse GetPartialObject() into GetObject()
2015-07-28 19:35:50 -07:00
Harshavardhana
d346250f1c Collapse GetPartialObject() into GetObject() 2015-07-28 19:33:56 -07:00
Harshavardhana
09aeabcf40 Merge pull request #779 from harshavardhana/pr_out_add_krishna_and_nate_into_contributors_list
Add krishna and nate into contributors list
2015-07-28 16:56:53 -07:00
Harshavardhana
1a47c47f11 Add krishna and nate into contributors list 2015-07-28 16:38:21 -07:00
Harshavardhana
6707e58bb3 Merge pull request #777 from krishnasrinivas/missingEncodedBlocksCount
Use missingEncodedBlocksCount directly instead of "-1" workaround in …
2015-07-28 11:14:29 -07:00
Krishna Srinivas
e1280779ed Read from the disks in parallel during object read 2015-07-28 18:04:17 +05:30
Krishna Srinivas
1ea91d2fa2 Use missingEncodedBlocksCount directly instead of "-1" workaround in missingEncodedBlocks[]
Makes Code more readable
2015-07-28 16:56:06 +05:30
Harshavardhana
55d22fa8d6 Merge pull request #776 from harshavardhana/dep_branch
Fix dependency checking on osx
2015-07-27 18:26:39 -07:00
Harshavardhana
c503bf412f Fix dependency checking on osx 2015-07-27 18:19:22 -07:00
Harshavardhana
7133513600 Merge pull request #775 from harshavardhana/pr_out_strip_off_quotes_from_etag_for_verifying_complete_multipart_upload
Strip off quotes from ETag for verifying complete multipart upload
2015-07-25 23:12:39 +00:00
Harshavardhana
b0ea64a04f Strip off quotes from ETag for verifying complete multipart upload 2015-07-25 16:10:05 -07:00
Harshavardhana
e082f26e10 Improving EncoderStream to return error only upon non io.EOF.
io.EOF is okay since io.ReadFull will not have read any bytes at all.

Also making error channel receive only for go routine.
2015-07-25 15:57:30 -07:00
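To illustrate the io.EOF distinction mentioned above: with io.ReadFull, a plain io.EOF means no bytes were read at all, while io.ErrUnexpectedEOF signals a final short block. The following is a minimal hedged sketch of that pattern, not the actual EncoderStream code; encodeStream, encodeBlock, and the error channel are hypothetical stand-ins.

```go
package main

import (
	"io"
	"strings"
)

// encodeStream reads fixed-size blocks and hands them to encodeBlock.
// io.EOF from io.ReadFull means nothing was read, so it is not treated
// as an error; io.ErrUnexpectedEOF means a final, short block.
func encodeStream(r io.Reader, blockSize int, encodeBlock func([]byte), errCh chan<- error) {
	defer close(errCh)
	buf := make([]byte, blockSize)
	for {
		n, err := io.ReadFull(r, buf)
		switch err {
		case nil:
			encodeBlock(buf[:n])
		case io.ErrUnexpectedEOF:
			encodeBlock(buf[:n]) // last short block
			return
		case io.EOF:
			return // clean end, nothing was read
		default:
			errCh <- err // only non-EOF errors are reported
			return
		}
	}
}

func main() {
	errCh := make(chan error, 1)
	go encodeStream(strings.NewReader("some object data"), 4, func(b []byte) {}, errCh)
	for err := range errCh {
		_ = err
	}
}
```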
Harshavardhana
4ac23d747c Merge pull request #773 from krishnasrinivas/put-object-stream
Encoder now directly reads from the object stream. Using split.Stream…
2015-07-25 22:39:18 +00:00
Krishna Srinivas
bcfaa12a4d Encoder now directly reads from the object stream. Using split.Stream() was causing lot of redundant memory operations. 2015-07-26 03:54:39 +05:30
Harshavardhana
ed07310471 Merge pull request #772 from harshavardhana/pr_out_use_new_app_extrainfo_inside_minio_and_donut_commands_properly
use new app.ExtraInfo inside minio and donut commands properly
2015-07-25 06:56:54 +00:00
Harshavardhana
0eefbdef0c use new app.ExtraInfo inside minio and donut commands properly 2015-07-24 23:55:18 -07:00
Harshavardhana
8346cc74db Merge pull request #771 from harshavardhana/pr_out_rename_more
Rename more
2015-07-25 01:11:28 +00:00
Harshavardhana
d6a0e0cc55 Rename more 2015-07-24 18:09:53 -07:00
Harshavardhana
80b7bc7ccc Merge pull request #770 from harshavardhana/pr_out_move_from_minimalist_object_storage_to_minio_cloud_storage
Move from Minimalist Object Storage to Minio Cloud Storage
2015-07-25 00:53:49 +00:00
Harshavardhana
63c9cf0c4b Move from Minimalist Object Storage to Minio Cloud Storage 2015-07-24 17:51:40 -07:00
Harshavardhana
b9c8b04092 Merge pull request #769 from harshavardhana/pr_out_update_slack_secure_builds
Update slack secure builds
2015-07-23 21:43:03 +00:00
Harshavardhana
7aa4e70bb3 Update slack secure builds 2015-07-23 14:41:19 -07:00
Harshavardhana
3b9043c2ac Merge pull request #768 from harshavardhana/pr_out_listobjects_now_considers_multipart_objects_also_move_to_upstream_check_v1
ListObjects now considers multipart objects, also move to upstream check.v1
2015-07-18 22:52:50 +00:00
Harshavardhana
43c908d5b9 ListObjects now considers multipart objects, also move to upstream check.v1 2015-07-18 15:49:41 -07:00
Harshavardhana
e397fa48c4 Merge pull request #767 from harshavardhana/pr_out_remove_unnecessary_updateat_
remove unnecessary updateAt()
2015-07-18 02:53:41 +00:00
Harshavardhana
966786c78e remove unnecessary updateAt() 2015-07-17 19:48:09 -07:00
Harshavardhana
ebb2d4ec5f Merge pull request #766 from harshavardhana/pr_out_donut_multipart_support_initial_cut_wip
Donut multipart support no get, listobjects() support yet
2015-07-17 23:26:50 +00:00
Harshavardhana
c1da10a207 Donut multipart support no get, listobjects() support yet 2015-07-17 16:23:51 -07:00
Harshavardhana
107ca4cbad Update README.md 2015-07-16 23:57:02 -07:00
Harshavardhana
e81f44f623 Merge pull request #765 from harshavardhana/pr_out_add_corresponding_tests 2015-07-17 00:42:03 +00:00
Harshavardhana
86a887f9d4 Add corresponding tests 2015-07-16 17:40:19 -07:00
Harshavardhana
5eae32f2b0 Return proper InvalidArgument messages like s3 for invalid data for ListObjects(), ListObjectParts(), ListMultipartUploads() 2015-07-16 17:22:45 -07:00
Harshavardhana
9d525ecadc Merge pull request #763 from harshavardhana/pr_out_method_not_allowed_is_right_response_for_delete_operations_and_add_tests
Method not allowed is right response for DELETE() operations and add tests
2015-07-16 21:36:05 +00:00
Harshavardhana
e605787e65 Method not allowed is right response for DELETE() operations and add tests 2015-07-16 14:15:24 -07:00
Harshavardhana
52f21e6696 Merge pull request #761 from harshavardhana/pr_out_use_updatedeps_script_to_update_godeps 2015-07-16 19:59:21 +00:00
Harshavardhana
e4543489fe Use updatedeps script to update godeps 2015-07-16 12:57:32 -07:00
Harshavardhana
245ea26395 Merge pull request #760 from harshavardhana/pr_out_fix_an_issue_with_reusing_closed_network_connetion_changing_the_way_ratelimitedlistener_is_initialized
Fix an issue with reusing closed network connection, changing the way rateLimitedListener is initialized
2015-07-16 17:59:50 +00:00
Harshavardhana
1f2e6a40a0 Fix an issue with reusing closed network connection, changing the way rateLimitedListener is initialized 2015-07-16 10:56:14 -07:00
Harshavardhana
dc11a411c7 Merge pull request #759 from harshavardhana/pr_out_heal_buckets_upon_init_if_needed_adding_new_disks_and_hup_works
Heal buckets upon init if needed, adding new disks and HUP works
2015-07-16 17:00:36 +00:00
Harshavardhana
5507a39840 Heal buckets upon init if needed, adding new disks and HUP works 2015-07-16 09:59:05 -07:00
Harshavardhana
00a701a155 Merge pull request #758 from harshavardhana/pr_out_add_file_method_to_ratelimitlistener_for_extracting_underlying_fd_
Add File() method to Ratelimitlistener for extracting underlying fd()
2015-07-16 07:13:10 +00:00
Harshavardhana
e4574c7d6f Add File() method to Ratelimitlistener for extracting underlying fd() 2015-07-16 00:11:17 -07:00
Harshavardhana
2e8f154f34 Iodine should indent with EmitJSON() 2015-07-16 00:10:55 -07:00
Harshavardhana
a23a61da11 Merge pull request #757 from harshavardhana/pr_out_remove_scsi_non_portable_code_instead_donut_make_implements_functionality_to_instantiate_a_donut
Remove scsi non portable code, instead "donut make" implements functionality to instantiate a donut
2015-07-15 18:59:14 +00:00
Harshavardhana
4498662c16 Remove scsi non portable code, instead "donut make" implements functionality to instantiate a donut 2015-07-15 11:55:57 -07:00
Harshavardhana
49c705da99 Merge pull request #756 from harshavardhana/pr_out_fix_a_crash_during_listobjects_populating_nextmarker
Fix a crash during listObjects() populating NextMarker
2015-07-15 17:06:15 +00:00
Harshavardhana
6baf45e360 Fix a crash during listObjects() populating NextMarker 2015-07-15 10:04:15 -07:00
Harshavardhana
a204f53eac Merge pull request #755 from harshavardhana/pr_out_add_initial_version_of_heal_remove_rebalance
Add initial version of heal, remove rebalance
2015-07-15 05:59:20 +00:00
Harshavardhana
2553654e24 Add initial version of heal, remove rebalance 2015-07-14 22:56:41 -07:00
Harshavardhana
6a64019659 Merge pull request #754 from harshavardhana/pr_out_live_multiple_disk_removal_works_properly
Live multiple disk removal works properly
2015-07-15 03:50:05 +00:00
Harshavardhana
e37c5315ec Live multiple disk removal works properly 2015-07-14 20:46:14 -07:00
Harshavardhana
48bb69701e Merge pull request #753 from harshavardhana/pr_out_wire_up_sha512_matching_inside_donut_along_with_md5sum
Wire up sha512 matching inside donut along with md5sum
2015-07-15 02:49:39 +00:00
Harshavardhana
e1e4908515 Wire up sha512 matching inside donut along with md5sum 2015-07-14 19:47:50 -07:00
Harshavardhana
2e5e85d8ad Merge pull request #752 from harshavardhana/pr_out_handle_removal_of_disks_getobject_now_reads_if_disks_are_missing_underneath_add_initial_stub_healing_code
Handle removal of disks - getObject() now reads if disks are missing underneath, add initial stub healing code
2015-07-15 01:54:41 +00:00
Harshavardhana
e885259584 Handle removal of disks - getObject() now reads if disks are missing underneath, add initial stub healing code 2015-07-14 18:53:00 -07:00
Harshavardhana
74853caf0c Merge pull request #751 from harshavardhana/pr_out_return_x_amz_request_id_for_all_replies
Return x-amz-request-id for all replies
2015-07-14 21:52:09 +00:00
Harshavardhana
efbf3eabb7 Return x-amz-request-id for all replies 2015-07-14 14:44:03 -07:00
Harshavardhana
518ecae711 Merge pull request #750 from harshavardhana/pr_out_remove_global_custom_config_path_variabls_use_get_set_methods_instead 2015-07-14 21:04:15 +00:00
Anand Babu (AB) Periasamy
7e2df542f1 Remove command name from title. 2015-07-14 11:58:37 -07:00
Anand Babu (AB) Periasamy
352a2dd34b Update README.md 2015-07-14 11:56:47 -07:00
Harshavardhana
da8b9fd112 Remove global custom config path variables, use get/set methods instead 2015-07-14 11:56:15 -07:00
Anand Babu (AB) Periasamy
893782ffab Update README.md 2015-07-14 10:57:53 -07:00
Harshavardhana
326e1364be Merge pull request #749 from harshavardhana/pr_out_read_req_body_for_putbucket_if_any
Read req.Body for PutBucket() if any
2015-07-14 16:33:37 +00:00
Harshavardhana
c4cf7635bf Read req.Body for PutBucket() if any 2015-07-14 09:30:10 -07:00
Harshavardhana
e4dd49bdd9 Merge pull request #748 from harshavardhana/pr_out_fix_an_ugly_multipart_bug
Fix an ugly multipart bug
2015-07-14 04:40:24 +00:00
Harshavardhana
45ddec925c Fix an ugly multipart bug 2015-07-13 21:38:01 -07:00
Harshavardhana
00acc47158 Merge pull request #747 from harshavardhana/pr_out_for_missing_parts_reply_back_as_invalidpart_
For missing parts reply back as InvalidPart{}
2015-07-14 02:32:56 +00:00
Harshavardhana
7ae60a6d10 For missing parts reply back as InvalidPart{} 2015-07-13 19:31:01 -07:00
Harshavardhana
fdee527a4d Merge pull request #746 from harshavardhana/pr_out_writeobject_encoded_data_in_go_routine_fix_another_multipart_issue
WriteObject() encoded data in go routine, fix another multipart issue
2015-07-13 22:58:59 +00:00
Harshavardhana
634f70f3b0 WriteObject() encoded data in go routine, fix another multipart issue 2015-07-13 15:56:54 -07:00
Harshavardhana
70e56dc594 Merge pull request #745 from harshavardhana/pr_out_remove_dependency_on_minio_cli_for_make_go_fixes_738
Remove dependency on minio/cli for make.go fixes #738
2015-07-13 18:41:21 +00:00
Harshavardhana
1e80925ca7 Remove dependency on minio/cli for make.go fixes #738 2015-07-13 11:39:28 -07:00
Harshavardhana
dfa43f7982 Merge pull request #744 from harshavardhana/pr_out_add_a_ratelimited_listener_than_a_ratelimited_handler_more_precise
Add a ratelimited listener rather than a ratelimited handler - more precise
2015-07-13 18:16:27 +00:00
Harshavardhana
1bad92356d Add a ratelimited listener rather than a ratelimited handler - more precise 2015-07-13 11:12:17 -07:00
Anand Babu (AB) Periasamy
8af5933b07 Merge pull request #743 from flandr/fix-osx
Fix OS X build
2015-07-13 10:15:53 -07:00
Nate Rosenblum
ec347f96fd Fix OS X build
- Explicitly cast statfs_t members to int64 (this structure is
  platform-specific)
- Add pass-through New methods to Darwin SHA package
- Move scsi pkg types to common translation unit (package was empty)
- Add stub implementations of mount/disk ops for OS X
2015-07-13 10:06:55 -07:00
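The explicit-cast point above arises because the integer widths of syscall.Statfs_t fields differ between Linux and Darwin. Below is a small hypothetical Unix-only sketch of the pattern (not the actual disk package code); diskUsage is an assumed helper name.

```go
package main

import (
	"fmt"
	"syscall"
)

// diskUsage returns total and free bytes for the filesystem at path.
// The casts matter: Statfs_t field types are platform-specific
// (e.g. Bsize is uint32 on Darwin, int64 on Linux), so normalize to int64.
func diskUsage(path string) (total, free int64, err error) {
	var s syscall.Statfs_t
	if err = syscall.Statfs(path, &s); err != nil {
		return 0, 0, err
	}
	total = int64(s.Blocks) * int64(s.Bsize)
	free = int64(s.Bavail) * int64(s.Bsize)
	return total, free, nil
}

func main() {
	total, free, err := diskUsage("/")
	if err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	fmt.Printf("total=%d free=%d\n", total, free)
}
```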
Harshavardhana
07eb9d9a4c Merge pull request #742 from harshavardhana/pr_out_add_mkdonut_examples
Add mkdonut examples
2015-07-13 04:40:20 +00:00
Harshavardhana
f360ee0ab5 Add mkdonut examples 2015-07-12 21:37:57 -07:00
Harshavardhana
36904d5da9 Merge pull request #741 from harshavardhana/pr_out_mkdonut_now_creates_a_donut_processing_cli_args
mkdonut now creates a donut processing cli args
2015-07-13 04:26:45 +00:00
Harshavardhana
55e4d0c6a5 mkdonut now creates a donut processing cli args 2015-07-12 21:21:31 -07:00
Harshavardhana
10fd5b8092 Merge pull request #740 from harshavardhana/pr_out_add_mkdonut_command
Add mkdonut command
2015-07-13 02:29:53 +00:00
Harshavardhana
535bcc3eac Add mkdonut command 2015-07-12 19:16:36 -07:00
Harshavardhana
4c95775864 Merge pull request #739 from harshavardhana/pr_out_renaming_nimble_to_minhttp 2015-07-13 01:05:29 +00:00
Harshavardhana
58a1d865a9 Renaming nimble to minhttp 2015-07-12 18:01:52 -07:00
Harshavardhana
65d97fa131 Merge pull request #737 from harshavardhana/pr_out_add_signature_v4_tests
Add signature v4 tests
2015-07-12 20:37:52 +00:00
Harshavardhana
847440196e Add signature v4 tests 2015-07-12 13:36:02 -07:00
Harshavardhana
74e44cd0c4 Merge pull request #736 from harshavardhana/pr_out_head_shouldn_t_have_any_body_handle_it_in_writeerrorresponse_
HEAD shouldn't have any body, handle it in writeErrorResponse()
2015-07-11 17:36:55 +00:00
Harshavardhana
7615a6bfe5 HEAD shouldn't have any body, handle it in writeErrorResponse() 2015-07-11 10:34:55 -07:00
Harshavardhana
7fde241ee2 Merge pull request #735 from harshavardhana/pr_out_cached_api_test_should_also_have_a_custom_config_path_would_conflict_with_your_minio_local_path 2015-07-11 03:43:35 +00:00
Harshavardhana
8977f9a524 cached api test should also have a custom config path, would conflict with your .minio local path 2015-07-10 20:39:30 -07:00
Harshavardhana
0a8ea1aca5 Merge pull request #734 from harshavardhana/pr_out_do_not_reply_on_ignoredheaders_for_server_rely_on_signedheaders_sent_as_part_of_authorization_header 2015-07-11 02:40:33 +00:00
Harshavardhana
97d4a27c7e Do not rely on ignoredHeaders for server, rely on SignedHeaders sent as part of Authorization header 2015-07-10 19:37:22 -07:00
Harshavardhana
538572ca91 Merge pull request #733 from harshavardhana/pr_out_nodejs_http_library_sends_connection_header_during_http_request_this_clobbers_up_the_signature_handling_ignore_it 2015-07-10 22:50:51 +00:00
Harshavardhana
53f5d2c32b nodejs http library sends a Connection header during HTTP requests; this clobbers the signature handling, so ignore it 2015-07-10 15:44:47 -07:00
Harshavardhana
d0eb4a2aea Merge pull request #732 from harshavardhana/pr_out_cleanup_temporary_writers_upon_errors_during_putobject_all_metadata_write_operations
Cleanup temporary writers upon errors during putObject(), all metadata() write operations
2015-07-10 21:13:59 +00:00
Harshavardhana
29838bb851 Cleanup temporary writers upon errors during putObject(), all metadata() write operations 2015-07-10 14:11:04 -07:00
Harshavardhana
6860b310c9 Merge pull request #731 from harshavardhana/pr_out_support_signature_v4_at_rest
Support signature v4 at rest
2015-07-10 18:52:03 +00:00
Harshavardhana
15dd0df187 Support signature v4 at rest 2015-07-10 11:49:27 -07:00
Harshavardhana
39026cb64b Merge pull request #730 from harshavardhana/pr_out_rename_definitions_to_log_go_add_valid_prefixes 2015-07-10 18:13:25 +00:00
Harshavardhana
7fa514351c Rename definitions to log.go, add valid prefixes 2015-07-10 11:11:20 -07:00
Harshavardhana
d7acf90b81 Merge pull request #729 from harshavardhana/pr_out_add_abbreviated_close_response_to_avoid_any_leaks 2015-07-10 17:21:59 +00:00
Harshavardhana
d5ffc16f25 Add abbreviated close response, to avoid any leaks 2015-07-10 10:20:00 -07:00
Harshavardhana
f1d52df130 Merge pull request #728 from harshavardhana/pr_out_handle_both_space_and_non_space_characters_in_signature_v4 2015-07-10 04:52:01 +00:00
Harshavardhana
e5006c738d Handle both space and non-space characters, in signature v4 - add errors for all API's 2015-07-09 21:49:58 -07:00
Harshavardhana
eac92d4647 Merge pull request #727 from harshavardhana/pr_out_all_other_api_s_now_support_signature_v4
All other API's now support signature v4
2015-07-10 02:47:32 +00:00
Harshavardhana
84f427f14a All other API's now support signature v4 2015-07-09 19:45:56 -07:00
Harshavardhana
00890c254e CompleteMultipartUpload and CreateObjectPart now fully support signature v4 2015-07-09 19:01:15 -07:00
Harshavardhana
89c1215194 PutObject handler gets initial support for signature v4, working 2015-07-09 16:44:38 -07:00
Harshavardhana
4f29dc9134 Merge pull request #724 from harshavardhana/pr_out_add_mountinfo_functions_for_detecting_mount_disks 2015-07-09 19:27:10 +00:00
Harshavardhana
d461fa5ab1 Add mountinfo functions for detecting mount disks, and other rpc changes 2015-07-09 12:25:29 -07:00
Harshavardhana
0f0d0c65e7 Merge pull request #723 from harshavardhana/pr_out_generate_auth_now_saves_in_home_minio_users_json_also_authhandler_verifies_request_validity 2015-07-09 04:54:42 +00:00
Harshavardhana
8654ddb566 Generate auth now saves in ${HOME}/.minio/users.json, also authHandler verifies request validity 2015-07-08 21:53:13 -07:00
Harshavardhana
51d2d8e221 Merge pull request #722 from harshavardhana/pr_out_http_header_content_length_signifies_body_length_of_the_request_if_its_smaller_reply_appropriately
HTTP header Content-Length signifies the body length of the request; if it's smaller, reply appropriately
2015-07-09 03:59:08 +00:00
Harshavardhana
375860077d HTTP header Content-Length signifies the body length of the request; if it's smaller, reply appropriately
This patch also handles large individual part sizes > 5MB using fewer memory copies.
2015-07-08 20:56:41 -07:00
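One way to read the check described above: compare the declared Content-Length against what the body actually yields and answer with a client error when the body comes up short. A hypothetical handler sketch follows, not the Minio handler itself; putHandler and the discard writer are stand-ins for the real object write path.

```go
package main

import (
	"io"
	"io/ioutil"
	"net/http"
)

// putHandler trusts Content-Length as the expected body size and replies
// with 400 Bad Request if the connection delivers fewer bytes than declared.
func putHandler(w http.ResponseWriter, r *http.Request) {
	expected := r.ContentLength
	if expected < 0 {
		http.Error(w, "missing Content-Length", http.StatusLengthRequired)
		return
	}
	n, err := io.Copy(ioutil.Discard, r.Body) // stand-in for the real object writer
	if err != nil || n < expected {
		http.Error(w, "incomplete request body", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", putHandler)
	http.ListenAndServe(":9000", nil)
}
```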
Harshavardhana
cea093bf65 Merge pull request #721 from harshavardhana/pr_out_add_server_side_signaturev4_check_not_wired_up_to_the_readers_yet 2015-07-08 23:59:01 +00:00
Harshavardhana
ec33d79d57 Add server side signaturev4 check, not wired up to the readers yet. 2015-07-08 16:57:03 -07:00
Harshavardhana
a904cb5002 Merge pull request #720 from harshavardhana/pr_out_add_auth_rpc_service_to_generate_access_keys 2015-07-08 21:43:17 +00:00
Harshavardhana
396b728031 Add auth rpc service to generate access keys, add corresponding test 2015-07-08 14:40:39 -07:00
Harshavardhana
770fd23afa Renaming keys as auth, working towards signature v4 support for all put objects 2015-07-08 14:17:16 -07:00
Harshavardhana
b71f15d32d Merge pull request #719 from harshavardhana/pr_out_fix_ssl_support_pointer_indirection_caused_nil_buffers 2015-07-08 18:04:26 +00:00
Harshavardhana
2413a110e6 Fix SSL support, pointer indirection caused nil buffers 2015-07-08 11:02:15 -07:00
Harshavardhana
f737c0540e Merge pull request #718 from harshavardhana/pr_out_add_api_tests_for_both_donut_on_disk_and_donut_cache
Add API tests for both donut on disk and donut cache
2015-07-08 02:41:44 +00:00
Harshavardhana
d1deda3a96 Add API tests for both donut on disk and donut cache 2015-07-07 19:39:46 -07:00
Harshavardhana
c76d4f6cdd Merge pull request #717 from harshavardhana/pr_out_add_rpc_tests 2015-07-08 00:31:49 +00:00
Harshavardhana
ece797c16e Add rpc tests 2015-07-07 17:27:34 -07:00
Harshavardhana
676b9058de Separate out memory statistics and system information into two different services 2015-07-07 16:59:20 -07:00
Harshavardhana
8abb96c030 If NodeDisks are not empty do not impose cache maxSize restriction 2015-07-07 16:41:40 -07:00
Harshavardhana
efa91474e7 Merge pull request #716 from harshavardhana/pr_out_add_nimblenet_tests
Add nimbleNet tests
2015-07-07 23:17:20 +00:00
Harshavardhana
a50a44b0ca Add nimbleNet tests 2015-07-07 16:15:47 -07:00
Harshavardhana
aea83a5fe4 Merge pull request #715 from harshavardhana/pr_out_add_net_addr_wrapper_with_isequal_and_use_it 2015-07-07 22:38:03 +00:00
Harshavardhana
317096a0c4 Add net.Addr wrapper with IsEqual() and use it. 2015-07-07 15:33:08 -07:00
Harshavardhana
11b893804c Moving os.MkdirAll() inside atomic for auto parent directory creates 2015-07-07 12:35:57 -07:00
Harshavardhana
5fc377ae57 Merge pull request #714 from harshavardhana/pr_out_move_atomic_file_writes_into_its_own_package_use_them_inside_quick_and_disk_packages 2015-07-07 19:31:42 +00:00
Harshavardhana
52cd23ad9f Move atomic file writes into its own package, use them inside quick and disk packages 2015-07-07 12:29:14 -07:00
Harshavardhana
fadadf0e1a Merge pull request #713 from harshavardhana/pr_out_across_donut_split_nimble_some_code_cleanup 2015-07-07 18:04:33 +00:00
Harshavardhana
3622fbc87d Across donut, split, nimble some code cleanup 2015-07-06 21:55:21 -07:00
Harshavardhana
7be35915bc Merge pull request #712 from harshavardhana/pr_out_add_multi_thread_protection_and_also_allow_atomic_file_creates_rename_upon_close_
Add multi-thread protection and also allow atomic file creates, rename upon Close()
2015-07-07 01:58:42 +00:00
Harshavardhana
bbb89b5776 Add multi-thread protection and also allow atomic file creates, rename upon Close() 2015-07-06 18:54:32 -07:00
Harshavardhana
c2c7bdf0cd Cleanup nimble http 2015-07-06 18:51:20 -07:00
Harshavardhana
5132fd84db Merge pull request #711 from harshavardhana/pr_out_avoid_config_reload_all_the_time_reload_is_manually_triggerred_from_outside 2015-07-07 00:29:46 +00:00
Harshavardhana
b029d0a5f0 Avoid config reload all the time, reload is manually triggered from outside 2015-07-06 17:26:35 -07:00
Harshavardhana
8b94c53345 Fix issues with multipart upload 2015-07-06 16:22:27 -07:00
Harshavardhana
474954022e Add modified grace library from facebookgo, rename it as nimble 2015-07-06 15:40:12 -07:00
Harshavardhana
2c18c3be68 Merge pull request #710 from harshavardhana/pr_out_add_donut_rpc_service_for_sending_changes_to_configuration_files 2015-07-06 18:13:18 +00:00
Harshavardhana
1d64e4b6c1 Add Donut rpc service for sending changes to configuration files 2015-07-06 11:10:06 -07:00
Harshavardhana
57d634da25 Merge pull request #709 from harshavardhana/pr_out_add_updateconfig_code_to_load_config_changes_if_possible_for_every_function
Add updateConfig code to load config changes if possible for every function
2015-07-06 05:51:46 +00:00
Harshavardhana
10b082144e Add updateConfig code to load config changes if possible for every function 2015-07-05 22:46:42 -07:00
Harshavardhana
36835befe6 Merge pull request #708 from harshavardhana/pr_out_add_sighup_sigusr2_into_trapping_code_to_trap_signals_for_reloading_configuration
Add sighup, sigusr2 into trapping code, to trap signals for reloading configuration.
2015-07-06 04:43:31 +00:00
Harshavardhana
ba0a5ed416 Add sighup, sigusr2 into trapping code, to trap signals for reloading configuration.
Still need to figure out a way to do graceful restarts - gave facebookgo/httpdown a shot,
but it is not suitable.
2015-07-05 21:40:53 -07:00
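The trapping code referred to above boils down to registering the extra signals with os/signal. Below is a minimal sketch under the assumption that reloadConfig is a hypothetical reload hook, not the project's actual signal handler.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func reloadConfig() { fmt.Println("reloading configuration") } // hypothetical hook

func main() {
	sigCh := make(chan os.Signal, 1)
	// SIGHUP and SIGUSR2 are trapped for configuration reload;
	// SIGINT/SIGTERM still terminate the process.
	signal.Notify(sigCh, syscall.SIGHUP, syscall.SIGUSR2, syscall.SIGINT, syscall.SIGTERM)
	for sig := range sigCh {
		switch sig {
		case syscall.SIGHUP, syscall.SIGUSR2:
			reloadConfig()
		default:
			fmt.Println("shutting down on", sig)
			return
		}
	}
}
```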
Harshavardhana
a74a2db8f0 Merge pull request #707 from harshavardhana/pr_out_fix_another_deadlock_inside_createobjectpart_code_premature_return_without_unlocking_ 2015-07-06 03:29:25 +00:00
Harshavardhana
4a27ab0e58 Fix another deadlock inside CreateObjectPart() code, premature return without Unlocking()
Also this patch changes the cache key element to be interface{} type not string.
2015-07-05 20:26:32 -07:00
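The class of deadlock fixed above usually comes from an early return between Lock() and Unlock(). The sketch below illustrates the defer pattern that prevents it; partCache and putPart are hypothetical names, not the donut cache code.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type partCache struct {
	mu    sync.Mutex
	parts map[int][]byte
}

// putPart locks the cache for the whole call; the deferred Unlock runs on
// every return path, so an early error return can no longer leave the
// mutex held and deadlock later callers.
func (c *partCache) putPart(id int, data []byte) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(data) == 0 {
		return errors.New("empty part") // early return is now safe
	}
	c.parts[id] = data
	return nil
}

func main() {
	c := &partCache{parts: make(map[int][]byte)}
	fmt.Println(c.putPart(1, []byte("abc")))
	fmt.Println(c.putPart(2, nil))
}
```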
Harshavardhana
d0386dbce0 Merge pull request #706 from harshavardhana/pr_out_fix_go_installation_check_on_amazon_instance
Fix go installation check on amazon instance
2015-07-06 01:14:44 +00:00
Harshavardhana
75788c7a1d Fix go installation check on amazon instance 2015-07-05 18:12:58 -07:00
Harshavardhana
46ab20dcee Merge pull request #705 from harshavardhana/pr_out_add_basic_controller_code_initiating_json_rpc_connection_getting_list_of_disks_and_memstats_for_now
Add basic controller code, initiating json rpc connection getting list of disks and memstats for now.
2015-07-06 00:20:41 +00:00
Harshavardhana
7f0c14f2b7 Add basic controller code, initiating json rpc connection getting list of disks and memstats for now. 2015-07-05 17:17:41 -07:00
Harshavardhana
75a32d1c01 Merge pull request #704 from harshavardhana/pr_out_rename_stuttered_service_names_and_make_them_appropriate 2015-07-05 17:23:22 +00:00
Harshavardhana
a3ccb9d405 Rename stuttered service names and make them appropriate 2015-07-05 10:19:23 -07:00
Harshavardhana
18a8891a15 Merge pull request #703 from harshavardhana/pr_out_minor_changes_to_command_templates 2015-07-05 17:15:20 +00:00
Harshavardhana
adc0a1063c Minor changes to command templates 2015-07-05 10:13:41 -07:00
Harshavardhana
486b82e950 Merge pull request #702 from harshavardhana/pr_out_add_disk_detection_for_linux_add_new_rpc_service_getdiskinfoservice_remove_dummy_helloservice_ 2015-07-05 09:10:03 +00:00
Harshavardhana
e66a84242a Add disk detection for Linux, add new RPC service GetDiskInfoService(), remove dummy HelloService() 2015-07-05 02:08:33 -07:00
Harshavardhana
181727ab57 Merge pull request #701 from harshavardhana/pr_out_move_to_container_list_datastructure_from_map_string_byte
Move to container/list datastructure from map[string][]byte
2015-07-05 00:10:25 +00:00
Harshavardhana
bab4a47525 Move to container/list datastructure from map[string][]byte 2015-07-04 17:08:23 -07:00
Harshavardhana
d11dfe003c Merge pull request #700 from harshavardhana/pr_out_implement_new_cpu_detection_using_cpuid_cpuidex_plan9_instructions_from_klauspost_cpuid_project_remove_c_code
Implement new CPU detection using cpuid, cpuidex plan9 instructions from klauspost/cpuid project, remove C code
2015-07-04 21:30:53 +00:00
Harshavardhana
aa67a19e99 Implement new CPU detection using cpuid, cpuidex plan9 instructions from klauspost/cpuid project, remove C code 2015-07-04 14:28:16 -07:00
Harshavardhana
9977888972 Merge pull request #698 from harshavardhana/pr_out_implement_metadata_cache_metadata_cache_is_used_by_top_level_donut_right_now_rename_trove_as_data_cache
Implement metadata cache, metadata cache is used by top level donut right now. Rename trove as data cache
2015-07-04 04:12:39 +00:00
Harshavardhana
0a827305ad Implement metadata cache, metadata cache is used by top level donut right now. Rename trove as data cache
We should use it internally everywhere.
2015-07-03 21:09:57 -07:00
Harshavardhana
7d2609856e Merge pull request #697 from harshavardhana/pr_out_make_donut_do_everything_as_an_atomic_operation_this_avoids_all_the_deadlocks_and_races
Make donut do everything as an atomic operation, this avoids all the deadlocks and races
2015-07-04 00:18:45 +00:00
Harshavardhana
14844f48dd Make donut do everything as an atomic operation, this avoids all the deadlocks and races 2015-07-03 17:16:58 -07:00
Harshavardhana
86bcfed2da Merge pull request #696 from minio/server-cleanup
Server cleanup
2015-07-03 22:58:08 +00:00
Harshavardhana
30fc14e703 Restructure codebase: move crypto, checksum to top-level, move `split` into donut, move crypto/keys into api/auth 2015-07-03 15:24:51 -07:00
Harshavardhana
8a4e7bcdcf Add full API tests, move storage/donut to donut, add disk tests as well 2015-07-03 14:36:29 -07:00
Harshavardhana
7c37e9d06a Make donut fully integrated back into API handlers 2015-07-02 21:04:04 -07:00
Harshavardhana
12bde7df30 Add simple Ticket Master which pro-actively sends messages on proceedChannel
Handlers are going to wait on proceedChannel; this is the initial step towards
providing priority for different sets of API operations
2015-07-02 21:04:04 -07:00
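The idea above can be pictured as a single goroutine that keeps feeding tickets into a channel while every handler takes one before doing work; that single feeding point is where priorities can later be applied. A hypothetical sketch follows, not the actual Ticket Master; ticketMaster and proceedCh are assumed names.

```go
package main

import (
	"fmt"
	"sync"
)

// ticketMaster pro-actively sends tickets on proceedCh; handlers block on a
// receive before proceeding, which gives a single point to inject priorities.
func ticketMaster(proceedCh chan<- struct{}) {
	for {
		proceedCh <- struct{}{}
	}
}

func handler(id int, proceedCh <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	<-proceedCh // wait for a ticket before serving the request
	fmt.Println("handler", id, "proceeding")
}

func main() {
	proceedCh := make(chan struct{})
	go ticketMaster(proceedCh)
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go handler(i, proceedCh, &wg)
	}
	wg.Wait()
}
```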
Harshavardhana
5cfb05465e Add cache, donut tests separately - fix behavior differences
Remove priority queue, implement it using simpler channels
2015-07-02 21:04:04 -07:00
Harshavardhana
ebe61d99d9 Use cache Append() for saving objects in memory, GetObject() caches un-cached entries while reading 2015-07-02 21:04:04 -07:00
Harshavardhana
bce93c1b3a Integrate cache with donut, add tests 2015-07-02 21:04:04 -07:00
Harshavardhana
0533abf6a8 Make priority queue lambda function return error over a channel 2015-07-02 21:04:04 -07:00
Harshavardhana
38a6ce36e5 Remove slow AppendUniq code, rolling over a slice is inefficient
Remove it and use a map instead
2015-07-02 21:04:04 -07:00
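For contrast, a map-based replacement costs one lookup per element instead of a linear scan of the slice. The snippet below is a small illustrative sketch of that technique only; it does not reproduce the removed AppendUniq helper.

```go
package main

import "fmt"

// uniq deduplicates with a map lookup per element, avoiding the
// O(n) slice scan that an append-if-absent helper would need.
func uniq(items []string) []string {
	seen := make(map[string]struct{}, len(items))
	out := items[:0]
	for _, it := range items {
		if _, ok := seen[it]; ok {
			continue
		}
		seen[it] = struct{}{}
		out = append(out, it)
	}
	return out
}

func main() {
	fmt.Println(uniq([]string{"a", "b", "a", "c", "b"}))
}
```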
Harshavardhana
84810162f5 Add simple Version and GetSysInfo services 2015-07-02 21:04:04 -07:00
Harshavardhana
14ec42d646 Add initial implementation of priority queue, uses container/heap 2015-07-02 21:04:04 -07:00
Harshavardhana
eb5aa19dfa Remove custom Config, will use quick Config instead for user access keys 2015-07-02 21:04:04 -07:00
Harshavardhana
701c3e5242 Add new RPC helpers wrapping over regular rpc packages, add middleware chaining ability 2015-07-02 21:04:04 -07:00
Harshavardhana
188785a886 Add and remove dependencies 2015-07-02 21:04:04 -07:00
Harshavardhana
4addf7a996 Restructure API handlers, add JSON RPC simple HelloService right now. 2015-07-02 21:04:04 -07:00
Harshavardhana
335c7827eb More donut, cache, api cleanup 2015-07-02 21:04:04 -07:00
Harshavardhana
dc0df3dc0e Breakaway from driver model, move cache into donut 2015-07-02 21:04:03 -07:00
Harshavardhana
72572d6c71 Remove some api server code bringing in new cleanup 2015-07-02 21:04:03 -07:00
Harshavardhana
c2031ca066 Add server and control command 2015-07-02 21:04:03 -07:00
Frederick F. Kautz IV
101784bc44 Merge pull request #695 from fkautz/pr_out_fixing_api_definitions 2015-07-02 13:16:49 -07:00
Frederick F. Kautz IV
cfbc169034 Fixing API definitions 2015-07-02 13:14:21 -07:00
Harshavardhana
1d31c76dd6 Merge pull request #694 from harshavardhana/pr_out_move_memory_code_out_add_it_as_layer_on_top_of_existing_cache_code_wip 2015-06-30 17:12:29 +00:00
Harshavardhana
8f61d6b6be Move memory code out, add it as a layer on top of existing donut code
Just like how http.Handlers can be overlaid on top of each other,
with each implementing ServeHTTP(),

drivers.Driver can be overlaid on top of each other in a similar manner,
with each implementing the drivers.Driver interface.

   API <----> cache <----> donut <----> donut(format)
2015-06-30 10:09:12 -07:00
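The layering described above mirrors http.Handler wrapping: each layer satisfies the same interface and delegates to the one below. The sketch that follows is hypothetical and uses a trimmed-down Driver interface rather than the real drivers.Driver definition; cacheDriver and donutDriver are assumed names.

```go
package main

import "fmt"

// Driver is a trimmed-down stand-in for the storage driver interface.
type Driver interface {
	GetObject(bucket, object string) ([]byte, error)
}

// cacheDriver overlays an in-memory cache on top of any other Driver,
// the same way an http.Handler can wrap another http.Handler.
type cacheDriver struct {
	next  Driver
	cache map[string][]byte
}

func (c *cacheDriver) GetObject(bucket, object string) ([]byte, error) {
	key := bucket + "/" + object
	if data, ok := c.cache[key]; ok {
		return data, nil // served from the cache layer
	}
	data, err := c.next.GetObject(bucket, object) // fall through to the layer below
	if err == nil {
		c.cache[key] = data
	}
	return data, err
}

// donutDriver is a dummy bottom layer standing in for the on-disk format.
type donutDriver struct{}

func (donutDriver) GetObject(bucket, object string) ([]byte, error) {
	return []byte("data for " + bucket + "/" + object), nil
}

func main() {
	var d Driver = &cacheDriver{next: donutDriver{}, cache: map[string][]byte{}}
	data, _ := d.GetObject("photos", "a.jpg")
	fmt.Println(string(data))
}
```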
Harshavardhana
fe3c618cc7 Merge pull request #693 from harshavardhana/pr_out_add_dummy_driver_for_community_to_submit_new_drivers 2015-06-29 23:48:37 +00:00
Harshavardhana
ab6e16bb41 Add dummy driver for community to submit new drivers 2015-06-29 16:43:50 -07:00
Harshavardhana
12de98fb62 Rename memory driver as cache 2015-06-29 16:43:50 -07:00
Harshavardhana
2571342451 Filesystem goes the high road *again* 2015-06-29 16:43:42 -07:00
Harshavardhana
c4c67581dc Merge pull request #692 from harshavardhana/pr_out_isvalidbucket_is_sufficient_we_don_t_need_to_verify_for_ 2015-06-29 22:27:11 +00:00
Harshavardhana
f74d6138da IsValidBucket() is sufficient we don't need to verify for "." 2015-06-29 15:15:54 -07:00
Harshavardhana
b5a5861c8f Merge pull request #691 from harshavardhana/pr_out_handle_couple_of_cases_of_oom_conditions_move_caching_to_getobject_rather_than_putobject_
Handle a couple of cases of OOM conditions, move caching to GetObject() rather than PutObject()
2015-06-29 19:33:55 +00:00
Harshavardhana
3109909355 Handle a couple of cases of OOM conditions, move caching to GetObject() rather than PutObject() 2015-06-29 12:28:50 -07:00
Harshavardhana
d07d0c670a Return back proper errors in writeObjectData(), rename few functions 2015-06-29 11:46:35 -07:00
Harshavardhana
be816145a9 Merge pull request #690 from harshavardhana/pr_out_put_object_on_successful_write_returns_full_metadata_to_avoid_subsequent_getobjectmetadata_calls_in_driver 2015-06-29 18:21:33 +00:00
Harshavardhana
10c807f233 Put object on successful write returns full metadata, to avoid subsequent GetObjectMetadata() calls in driver 2015-06-29 11:15:46 -07:00
Harshavardhana
6921328b93 Avoid frivolous GetObjectMetadata() calls at driver level, return back all the information in donut ListObjects() 2015-06-29 11:14:58 -07:00
Harshavardhana
f05ad062ee Merge pull request #689 from harshavardhana/pr_out_expand_http_server_struct_to_store_more_values 2015-06-29 07:17:31 +00:00
Harshavardhana
d8f7896a43 Expand http server struct to store more values 2015-06-29 00:12:28 -07:00
Harshavardhana
63f9647c80 Merge pull request #688 from harshavardhana/pr_out_use_errorchannels_only_for_services_not_for_drivers_reduce_them_to_use_simple_functions 2015-06-29 07:02:24 +00:00
Harshavardhana
42c0287943 Use errorChannels only for services not for drivers, reduce them to use simple functions 2015-06-28 23:59:47 -07:00
Harshavardhana
b2bf90afbd Merge pull request #687 from harshavardhana/pr_out_move_to_set_not_append_due_to_large_memory_reference_copy 2015-06-28 17:15:06 +00:00
Harshavardhana
91e5f648cb Move to Set() not Append() due to large memory reference copy 2015-06-28 10:13:12 -07:00
Harshavardhana
22abe1b397 Merge pull request #686 from harshavardhana/pr_out_add_free_method_for_proxyreader_to_aggressively_de_allocate_read_data_to_handle_certain_out_of_memory_conditions
Add free() method for proxyReader to aggressively de-allocate Read data, to handle certain out of memory conditions
2015-06-28 03:45:48 +00:00
Harshavardhana
ac4d8fe478 Add free() method for proxyReader to aggressively de-allocate Read data, to handle certain out of memory conditions
There are still some more out there
2015-06-27 20:43:25 -07:00
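One way to read the free() idea above: the reader accumulates what it has proxied so the data can be cached afterwards, and free() drops that reference so the garbage collector can reclaim it under memory pressure. The code below is a hypothetical reconstruction of the pattern, not the original proxyReader.

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// proxyReader passes reads through while accumulating a copy of the data,
// e.g. so it can be cached after a successful upload.
type proxyReader struct {
	r   io.Reader
	buf []byte
}

func (p *proxyReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	p.buf = append(p.buf, b[:n]...)
	return n, err
}

// free drops the accumulated buffer so the memory can be garbage collected,
// e.g. when an out-of-memory condition makes caching undesirable.
func (p *proxyReader) free() { p.buf = nil }

func main() {
	p := &proxyReader{r: strings.NewReader("object payload")}
	io.Copy(ioutil.Discard, p)
	fmt.Println("accumulated", len(p.buf), "bytes")
	p.free()
	fmt.Println("after free:", len(p.buf), "bytes")
}
```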
Harshavardhana
a4d20d1e75 Merge pull request #685 from harshavardhana/pr_out_add_append_method_to_trove_cache_for_appending_data_to_an_existing_key
Add Append() method to trove cache for appending data to an existing key
2015-06-28 03:28:06 +00:00
Harshavardhana
05f8654e3d Add Append() method to trove cache for appending data to an existing key
This largely avoids a large buffer copy which would accumulate inside proxyReader{}

This patch also implements "initialize()" function to init and populate data
on all the existing buckets, avoiding the redundant ListBuckets() invoked by
every API call.
2015-06-27 20:25:24 -07:00
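The Append() addition above means the cache grows a value in place instead of the caller rebuilding and re-Set()ting an ever larger buffer. Below is a minimal illustrative sketch of that idea, not the trove package itself; Cache, Set, and Append are assumed names.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a tiny stand-in for a trove-like in-memory key/value store.
type Cache struct {
	mu sync.Mutex
	m  map[string][]byte
}

func NewCache() *Cache { return &Cache{m: make(map[string][]byte)} }

func (c *Cache) Set(key string, data []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = data
}

// Append grows the stored value in place, so callers streaming an object in
// chunks never have to copy the whole accumulated buffer on every write.
func (c *Cache) Append(key string, data []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = append(c.m[key], data...)
}

func main() {
	c := NewCache()
	c.Append("bucket/object", []byte("part1-"))
	c.Append("bucket/object", []byte("part2"))
	fmt.Println(string(c.m["bucket/object"]))
}
```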
Harshavardhana
762aae7c32 Merge pull request #684 from harshavardhana/pr_out_make_sure_to_populate_on_disk_data_into_memory_upon_first_api_requests 2015-06-28 01:28:29 +00:00
Harshavardhana
367772b988 Make sure to populate on-disk data into memory upon first API request 2015-06-27 18:25:21 -07:00
Harshavardhana
350e6eb5bb Merge pull request #683 from harshavardhana/pr_out_add_proper_command_paramters_for_donut 2015-06-28 00:41:13 +00:00
Harshavardhana
07a6aafc94 Add proper command parameters for donut 2015-06-27 17:39:22 -07:00
Harshavardhana
c65969077d Merge pull request #682 from harshavardhana/pr_out_an_attempt_to_bring_in_memory_layer_into_donut_driver 2015-06-28 00:25:27 +00:00
Harshavardhana
45a7eab804 An attempt to bring in memory layer into donut driver 2015-06-27 17:23:34 -07:00
Harshavardhana
7ab16b5b83 Merge pull request #681 from harshavardhana/pr_out_keeping_the_lexical_order_same_add_optimizations_provide_a_comprehensive_response_from_listobjects_
Keeping the lexical order the same, add optimizations and provide a comprehensive response from ListObjects()
2015-06-27 20:15:24 +00:00
Harshavardhana
f3c25bcfc4 Keeping the lexical order the same, add optimizations and provide a comprehensive response from ListObjects() 2015-06-27 13:12:44 -07:00
Harshavardhana
795e48d492 Merge pull request #680 from harshavardhana/pr_out_rename_functions_for_their_purpose 2015-06-27 19:42:04 +00:00
Harshavardhana
ae66ae42c4 Rename functions for their purpose 2015-06-27 12:39:11 -07:00
Harshavardhana
9c2e861470 Merge pull request #679 from harshavardhana/pr_out_object_metadata_was_wrongly_misconstrued_to_be_mutable_handle_it 2015-06-27 06:26:21 +00:00
Harshavardhana
9a4680475f Object metadata was wrongly assumed to be mutable, handle it 2015-06-26 23:22:53 -07:00
Harshavardhana
39f26acbc9 Merge pull request #678 from harshavardhana/pr_out_handle_racy_map_updates_in_listobjects_on_a_bucket
Avoid racy maps; read from disk and return quickly on success. Many more optimizations
2015-06-27 02:59:50 +00:00
Harshavardhana
3aa6d90c5e Avoid racy maps; read from disk and return quickly on success. Many more optimizations 2015-06-26 19:49:37 -07:00
Harshavardhana
aab4937084 Merge pull request #677 from harshavardhana/pr_out_moving_to_more_typed_responses_this_removes_all_the_necessity_for_strconv
Donut moves to typed metadata, removing the necessity for strconv, parsing and string conversions
2015-06-26 23:26:17 +00:00
Harshavardhana
68974918ac Donut moves to typed metadata, removing the necessity for strconv, parsing and string conversions 2015-06-26 16:23:12 -07:00
Harshavardhana
e3d8a9e0f1 Merge pull request #676 from harshavardhana/pr_out_add_new_metadata_definitions_and_use_them_wip 2015-06-26 20:38:02 +00:00
Harshavardhana
767d3743ee Add new metadata definitions and use them 2015-06-26 13:34:09 -07:00
Harshavardhana
0cb3f76a91 Merge pull request #675 from harshavardhana/pr_out_import_quick_key_value_store_from_minio_client_for_persistent_state_files_primarily_for_donut
Import quick key value store from Minio Client for persistent state files, primarily for donut
2015-06-26 02:59:28 +00:00
Harshavardhana
9958e34772 Import quick key value store from Minio Client for persistent state files, primarily for donut 2015-06-25 19:57:31 -07:00
Harshavardhana
bd0dccd8f1 Merge pull request #674 from harshavardhana/pr_out_donut_cleanup_another_set
Donut cleanup another set
2015-06-26 01:59:45 +00:00
Harshavardhana
fb9adb5524 Donut cleanup another set
- Make sure to close all readers
- Fix errors in api_testsuite: c.Assert(err, IsNil) should be done right after each function call
2015-06-25 18:54:34 -07:00
Harshavardhana
eec66f195a Take all the ListObjects into bucket handlers
Earlier the listing would wait for all the objects to be processed,
which is very time consuming even for 100,000 files.
2015-06-25 18:04:29 -07:00
Harshavardhana
8405c4d42f Merge pull request #673 from harshavardhana/pr_out_remove_more_bloated_code_simplify
Remove more bloated code - simplify
2015-06-25 21:14:43 +00:00
Harshavardhana
45e9d25931 Remove more bloated code - simplify 2015-06-25 13:02:08 -07:00
Harshavardhana
7ade42165f Merge pull request #672 from harshavardhana/pr_out_add_simple_locking_for_donut_api_for_now_fixes_671 2015-06-25 18:33:08 +00:00
Harshavardhana
82dcbf262d Add simple locking for donut API for now - fixes #671 2015-06-25 11:29:11 -07:00
Harshavardhana
5abcb7f348 Merge pull request #670 from harshavardhana/pr_out_go_vet_fixes_for_donut 2015-06-25 04:08:49 +00:00
Harshavardhana
03b4d3b308 Go vet fixes for donut 2015-06-24 21:07:03 -07:00
Harshavardhana
57a2b53178 Removing further bloated code, simplifying 2015-06-24 21:03:39 -07:00
Harshavardhana
a2c205ff2e Use external package disk for donut. 2015-06-24 21:03:02 -07:00
Harshavardhana
841ff01412 Move disk into its own package, remove bloated code 2015-06-24 20:13:47 -07:00
Harshavardhana
1682c748ac Remove unnecessary interfaces from donut, cleanup 2015-06-24 19:43:38 -07:00
Harshavardhana
b915cc3611 Merge pull request #669 from harshavardhana/pr_out_add_sha256_and_sha512_windows_compatibility_layer 2015-06-24 21:43:26 +00:00
Harshavardhana
3498872467 Add sha256 and sha512 windows compatibility layer 2015-06-24 14:39:41 -07:00
Harshavardhana
2af863cefc Merge pull request #668 from harshavardhana/pr_out_fix_a_bug_on_windows_regarding_blocksse3_calculation 2015-06-24 21:29:34 +00:00
Harshavardhana
77d35b87d4 Fix a bug on windows regarding blockSSE3 calculation 2015-06-24 14:24:33 -07:00
Harshavardhana
1056e7e180 Merge pull request #667 from harshavardhana/pr_out_add_windows_code_for_sha1_and_crc32c 2015-06-24 21:19:54 +00:00
Harshavardhana
f1410731db Add windows code for sha1 and crc32c 2015-06-24 14:16:32 -07:00
Harshavardhana
fbee8f8122 Merge pull request #665 from harshavardhana/pr_out_fix_wrong_tmpfs_listing_in_document_filesystem_map 2015-06-23 23:17:21 -07:00
Harshavardhana
ba2d3dea74 Fix wrong TMPFS listing in donut filesystem map 2015-06-23 23:16:06 -07:00
Harshavardhana
792e6c2d3a Merge pull request #664 from harshavardhana/pr_out_trim_iodine_path_properly_so_that_now_errors_have_github_com_minio_minio_prefixed 2015-06-23 13:39:33 -07:00
Harshavardhana
e818bc7187 Trim iodine path properly, so that now errors have github.com/minio/minio prefixed 2015-06-23 13:36:54 -07:00
Harshavardhana
3e9141d22d Merge pull request #663 from harshavardhana/pr_out_add_missing_strongly_typed_errors_for_donut 2015-06-23 11:59:17 -07:00
Harshavardhana
2fd52ca284 Add missing strongly typed errors for Donut 2015-06-23 11:54:44 -07:00
Harshavardhana
a4fda9fa9c Merge pull request #662 from vadmeste/enhance_golang_env_check
Check if go binary belongs to the go installation pointed by GOROOT env
2015-06-23 10:35:01 -07:00
Anis Elleuch
4bed0aa526 Check if go binary belongs to the go installation pointed by GOROOT env 2015-06-23 17:44:05 +01:00
Harshavardhana
d0dd047bef Merge pull request #661 from harshavardhana/pr_out_fix_builddeps_paths_for_golang_installation 2015-06-22 12:09:10 -07:00
Harshavardhana
7a060110ff Fix BUILDDEPS paths for golang installation 2015-06-22 12:07:23 -07:00
Harshavardhana
c340a10e8f Merge pull request #660 from harshavardhana/pr_out_verify_d_donut_to_be_non_nil_usually_happens_when_multiple_go_versions_compilations_are_linked_possible_cause_for_659 2015-06-20 11:04:58 -07:00
Harshavardhana
3bf64f5669 Verify d.donut to be non-nil, usually happens when multiple go versions compilations are linked - possible cause for #659 2015-06-20 11:03:17 -07:00
Harshavardhana
cdeadae167 Merge pull request #657 from harshavardhana/pr_out_use_filepath_everywhere_instead_of_path_functions_for_portability_fixes_656 2015-06-18 16:42:29 -07:00
Harshavardhana
641f07cecf Use filepath everywhere instead of path.{} functions for portability - fixes #656 2015-06-18 16:02:45 -07:00
Harshavardhana
285b1cc5d8 Merge pull request #655 from harshavardhana/pr_out_remove_redundant_ok_for_map
Remove redundant !ok for map
2015-06-17 22:39:33 -07:00
Harshavardhana
573a6134b2 Remove redundant !ok for map 2015-06-17 22:36:46 -07:00
Harshavardhana
3842a57f52 Merge pull request #653 from harshavardhana/pr_out_hold_lock_on_getglobalstatekey_fixes_652
Hold lock on GetGlobalStateKey() - fixes #652
2015-06-17 20:44:03 -07:00
Harshavardhana
e9a3fd677a Hold lock on GetGlobalStateKey() - fixes #652 2015-06-17 20:37:49 -07:00
Harshavardhana
b793f53d48 Minor change: filter() to filterObjects() 2015-06-17 20:35:44 -07:00
Harshavardhana
45424cfe52 Update README.md 2015-06-17 13:58:40 -07:00
981 changed files with 123597 additions and 70983 deletions

1
.gitattributes vendored

@@ -1 +0,0 @@
main.go ident

13
.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,13 @@
## Expected behaviour
## Actual behaviour
## Steps to reproduce the behaviour
## Minio version
- (paste output of `minio version`)
## System information

8
.gitignore vendored

@@ -1,4 +1,3 @@
/build-constants.go
**/*.swp
cover.out
*~
@@ -10,3 +9,10 @@ site/
/.idea/
/Minio.iml
**/access.log
build
vendor/**/*.js
vendor/**/*.json
release
.DS_Store
*.syso
coverage.txt

View File

@@ -6,10 +6,13 @@
#
# For explanation on this file format: man git-shortlog
Anand Babu (AB) Periasamy <ab@unlocksmith.org> <abperiasamy@users.noreply.github.com>
Anand Babu (AB) Periasamy <ab@minio.io> Anand Babu (AB) Periasamy <abperiasamy@users.noreply.github.com>
Anand Babu (AB) Periasamy <ab@minio.io> <ab@unlocksmith.org>
Anis Elleuch <vadmeste@gmail.com>
Frederick F. Kautz IV <fkautz@minio.io> <fkautz@alumni.cmu.edu>
Harshavardhana <harsha@minio.io> <harsha@harshavardhana.net>
Harshavardhana <harsha@minio.io> <badger@gitter.im>
Harshavardhana <harsha@minio.io>
Matthew Farrellee <matt@cs.wisc.edu>
Krishna Srinivas <krishna@minio.io> <krishna.srinivas@gmail.com>
Matthew Farrellee <matt@cs.wisc.edu>
Nate Rosenblum <flander@gmail.com>

4
.mention-bot Normal file

@@ -0,0 +1,4 @@
{
"numFilesToCheck": 10,
"requiredOrgs": ["minio"]
}

View File

@@ -1,16 +1,23 @@
sudo: required
dist: trusty
language: go
before_install:
- git clone https://github.com/yasm/yasm
- cd yasm
- git checkout v1.2.0
- "./autogen.sh"
- "./configure"
- make
- export PATH=$PATH:`pwd`
- cd ..
sudo: false
os:
- linux
- osx
osx_image: xcode7.2
env:
- ARCH=x86_64
- ARCH=i686
script:
- make test GOFLAGS="-race"
- make coverage
after_success:
- bash <(curl -s https://codecov.io/bash)
go:
- 1.4.2
notifications:
slack:
secure: jlDBuqna7waJXJrl/EOeTH1fXgqJu3WrTDl0Sv7oJBNVH1Af4cGmHaa1oVWrYUMB7lEPjpuF+xcBNA+N+mcR53JbpqueR3sIKlokqHL4TPZBg4XX+1yqtmYMkL6V2woWQ7Wmtis0kDstSoVZjEUVHgk3YF8hcLlK49oMhTeqY08=
- 1.6.2

View File

@@ -3,23 +3,23 @@
If you do not have a working Golang environment setup please follow [Golang Installation Guide](./INSTALLGO.md).
### Setup your Minio Github Repository
Fork [Minio upstream](https://github.com/minio/minio/fork) source repository to your own personal repository. Copy the URL and pass it to ``go get`` command. Go uses git to clone a copy into your project workspace folder.
Fork [Minio upstream](https://github.com/minio/minio/fork) source repository to your own personal repository. Copy the URL for minio from your personal github repo (you will need it for the `git clone` command below).
```sh
$ mkdir -p $GOPATH/src/github.com/minio
$ cd $GOPATH/src/github.com/minio
$ git clone https://github.com/$USER_ID/minio
$ git clone <paste saved URL for personal forked minio repo>
$ cd minio
```
### Compiling Minio from source
Minio uses ``Makefile`` to wrap around some of the limitations of ``go`` build system. To compile Minio source, simply change to your workspace folder and type ``make``.
Minio uses ``Makefile`` to wrap around some of redundant checks done through command line.
```sh
$ make
Checking if proper environment variables are set.. Done
...
Checking dependencies for Minio.. Done
Installed godep
Installed cover
Installed govet
Building Libraries
...
...
@@ -37,8 +37,7 @@ $ make
Checking if proper environment variables are set.. Done
...
Checking dependencies for Minio.. Done
Installed godep
Installed cover
Installed govet
Building Libraries
...
```
@@ -52,20 +51,21 @@ Building Libraries
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
* If you have additional dependencies for ``Minio``, ``Minio`` manages its depedencies using [godep](https://github.com/tools/godep)
* If you have additional dependencies for ``Minio``, ``Minio`` manages its dependencies using [govendor](https://github.com/kardianos/govendor)
- Run `go get foo/bar`
- Edit your code to import foo/bar
- Run `make save` from top-level directory (or `godep restore && godep save ./...`).
- Run `make pkg-add PKG=foo/bar` from top-level directory
* If you have dependencies for ``Minio`` which needs to be removed
- Edit your code to not import foo/bar
- Run `make pkg-remove PKG=foo/bar` from top-level directory
* When you're ready to create a pull request, be sure to:
- Have test cases for the new code. If you have questions about how to do it, please ask in your pull request.
- Run `go fmt`
- Run `golint`
```
$ go get github.com/golang/lint/golint
$ golint ./...
```
- Run `make verifiers`
- Squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request.
- Make sure `go test -race ./...` and `go build` completes.
* Read [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project
- `Minio` project is strictly conformant with Golang style
- `Minio` project is fully conformant with Golang style
- if you happen to observe offending code, please feel free to send a pull request

View File

@@ -1,8 +0,0 @@
## Contributors
<!-- DO NOT EDIT - CONTRIBUTORS.md is autogenerated from git commit log by contributors.sh script. -->
- Anand Babu (AB) Periasamy <ab@unlocksmith.org>
- Anis Elleuch <vadmeste@gmail.com>
- Frederick F. Kautz IV <fkautz@minio.io>
- Harshavardhana <harsha@minio.io>
- Matthew Farrellee <matt@cs.wisc.edu>

View File

@@ -1,34 +1,15 @@
FROM golang:1.6
FROM ubuntu:14.04
RUN mkdir -p /go/src/app
WORKDIR /go/src/app
MAINTAINER Minio Community
COPY . /go/src/app
RUN go-wrapper download
RUN go-wrapper install
ENV GOLANG_TARBALL go1.4.2.linux-amd64.tar.gz
ENV ALLOW_CONTAINER_ROOT=1
RUN mkdir -p /export/docker && cp /go/src/app/docs/Docker.md /export/docker/
ENV GOROOT /usr/local/go/
ENV GOPATH /go-workspace
ENV PATH ${GOROOT}/bin:${GOPATH}/bin/:$PATH
RUN apt-get update -y && apt-get install -y -q \
curl \
git \
build-essential \
ca-certificates \
yasm
RUN curl -O -s https://storage.googleapis.com/golang/${GOLANG_TARBALL} && \
tar -xzf ${GOLANG_TARBALL} -C ${GOROOT%*go*} && \
rm ${GOLANG_TARBALL}
ADD . ${GOPATH}/src/github.com/minio/minio
RUN cd ${GOPATH}/src/github.com/minio/minio && \
make
RUN apt-get remove -y build-essential curl git && \
apt-get -y autoremove && \
rm -rf /var/lib/apt/lists/*
EXPOSE 9000 9001
CMD ["sh", "-c", "${GOPATH}/bin/minio mode memory 2G"]
EXPOSE 9000
ENTRYPOINT ["go-wrapper", "run", "server"]
CMD ["/export"]

42
Godeps/Godeps.json generated

@@ -1,42 +0,0 @@
{
"ImportPath": "github.com/minio/minio",
"GoVersion": "go1.4",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "github.com/dustin/go-humanize",
"Rev": "8cc1aaa2d955ee82833337cfb10babc42be6bce6"
},
{
"ImportPath": "github.com/gorilla/context",
"Rev": "50c25fb3b2b3b3cc724e9b6ac75fb44b3bccd0da"
},
{
"ImportPath": "github.com/gorilla/mux",
"Rev": "e444e69cbd2e2e3e0749a2f3c717cec491552bbf"
},
{
"ImportPath": "github.com/minio/check",
"Rev": "67f8c16c6c27bb03c82e41c2be533ace00035ab4"
},
{
"ImportPath": "github.com/minio/cli",
"Comment": "1.2.0-112-g823349c",
"Rev": "823349ce91e76834a4af0119d5bbc58fd4d2c6b0"
},
{
"ImportPath": "github.com/stretchr/objx",
"Rev": "cbeaeb16a013161a98496fad62933b1d21786672"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Rev": "e4ec8152c15fc46bd5056ce65997a07c7d415325"
},
{
"ImportPath": "github.com/stretchr/testify/mock",
"Rev": "e4ec8152c15fc46bd5056ce65997a07c7d415325"
}
]
}

5
Godeps/Readme generated

@@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

2
Godeps/_workspace/.gitignore generated vendored

@@ -1,2 +0,0 @@
/pkg
/bin

View File

@@ -1,6 +0,0 @@
#*
*.[568]
*.a
*~
[568].out
_*

View File

@@ -1,219 +0,0 @@
package humanize
import (
"math/big"
"testing"
)
func TestBigByteParsing(t *testing.T) {
tests := []struct {
in string
exp uint64
}{
{"42", 42},
{"42MB", 42000000},
{"42MiB", 44040192},
{"42mb", 42000000},
{"42mib", 44040192},
{"42MIB", 44040192},
{"42 MB", 42000000},
{"42 MiB", 44040192},
{"42 mb", 42000000},
{"42 mib", 44040192},
{"42 MIB", 44040192},
{"42.5MB", 42500000},
{"42.5MiB", 44564480},
{"42.5 MB", 42500000},
{"42.5 MiB", 44564480},
// No need to say B
{"42M", 42000000},
{"42Mi", 44040192},
{"42m", 42000000},
{"42mi", 44040192},
{"42MI", 44040192},
{"42 M", 42000000},
{"42 Mi", 44040192},
{"42 m", 42000000},
{"42 mi", 44040192},
{"42 MI", 44040192},
{"42.5M", 42500000},
{"42.5Mi", 44564480},
{"42.5 M", 42500000},
{"42.5 Mi", 44564480},
// Large testing, breaks when too much larger than
// this.
{"12.5 EB", uint64(12.5 * float64(EByte))},
{"12.5 E", uint64(12.5 * float64(EByte))},
{"12.5 EiB", uint64(12.5 * float64(EiByte))},
}
for _, p := range tests {
got, err := ParseBigBytes(p.in)
if err != nil {
t.Errorf("Couldn't parse %v: %v", p.in, err)
} else {
if got.Uint64() != p.exp {
t.Errorf("Expected %v for %v, got %v",
p.exp, p.in, got)
}
}
}
}
func TestBigByteErrors(t *testing.T) {
got, err := ParseBigBytes("84 JB")
if err == nil {
t.Errorf("Expected error, got %v", got)
}
got, err = ParseBigBytes("")
if err == nil {
t.Errorf("Expected error parsing nothing")
}
}
func bbyte(in uint64) string {
return BigBytes((&big.Int{}).SetUint64(in))
}
func bibyte(in uint64) string {
return BigIBytes((&big.Int{}).SetUint64(in))
}
func TestBigBytes(t *testing.T) {
testList{
{"bytes(0)", bbyte(0), "0B"},
{"bytes(1)", bbyte(1), "1B"},
{"bytes(803)", bbyte(803), "803B"},
{"bytes(999)", bbyte(999), "999B"},
{"bytes(1024)", bbyte(1024), "1.0KB"},
{"bytes(1MB - 1)", bbyte(MByte - Byte), "1000KB"},
{"bytes(1MB)", bbyte(1024 * 1024), "1.0MB"},
{"bytes(1GB - 1K)", bbyte(GByte - KByte), "1000MB"},
{"bytes(1GB)", bbyte(GByte), "1.0GB"},
{"bytes(1TB - 1M)", bbyte(TByte - MByte), "1000GB"},
{"bytes(1TB)", bbyte(TByte), "1.0TB"},
{"bytes(1PB - 1T)", bbyte(PByte - TByte), "999TB"},
{"bytes(1PB)", bbyte(PByte), "1.0PB"},
{"bytes(1PB - 1T)", bbyte(EByte - PByte), "999PB"},
{"bytes(1EB)", bbyte(EByte), "1.0EB"},
// Overflows.
// {"bytes(1EB - 1P)", Bytes((KByte*EByte)-PByte), "1023EB"},
{"bytes(0)", bibyte(0), "0B"},
{"bytes(1)", bibyte(1), "1B"},
{"bytes(803)", bibyte(803), "803B"},
{"bytes(1023)", bibyte(1023), "1023B"},
{"bytes(1024)", bibyte(1024), "1.0KiB"},
{"bytes(1MB - 1)", bibyte(MiByte - IByte), "1024KiB"},
{"bytes(1MB)", bibyte(1024 * 1024), "1.0MiB"},
{"bytes(1GB - 1K)", bibyte(GiByte - KiByte), "1024MiB"},
{"bytes(1GB)", bibyte(GiByte), "1.0GiB"},
{"bytes(1TB - 1M)", bibyte(TiByte - MiByte), "1024GiB"},
{"bytes(1TB)", bibyte(TiByte), "1.0TiB"},
{"bytes(1PB - 1T)", bibyte(PiByte - TiByte), "1023TiB"},
{"bytes(1PB)", bibyte(PiByte), "1.0PiB"},
{"bytes(1PB - 1T)", bibyte(EiByte - PiByte), "1023PiB"},
{"bytes(1EiB)", bibyte(EiByte), "1.0EiB"},
// Overflows.
// {"bytes(1EB - 1P)", bibyte((KIByte*EIByte)-PiByte), "1023EB"},
{"bytes(5.5GiB)", bibyte(5.5 * GiByte), "5.5GiB"},
{"bytes(5.5GB)", bbyte(5.5 * GByte), "5.5GB"},
}.validate(t)
}
func TestVeryBigBytes(t *testing.T) {
b, _ := (&big.Int{}).SetString("15347691069326346944512", 10)
s := BigBytes(b)
if s != "15ZB" {
t.Errorf("Expected 15ZB, got %v", s)
}
s = BigIBytes(b)
if s != "13ZiB" {
t.Errorf("Expected 13ZiB, got %v", s)
}
b, _ = (&big.Int{}).SetString("15716035654990179271180288", 10)
s = BigBytes(b)
if s != "16YB" {
t.Errorf("Expected 16YB, got %v", s)
}
s = BigIBytes(b)
if s != "13YiB" {
t.Errorf("Expected 13YiB, got %v", s)
}
}
func TestVeryVeryBigBytes(t *testing.T) {
b, _ := (&big.Int{}).SetString("16093220510709943573688614912", 10)
s := BigBytes(b)
if s != "16093YB" {
t.Errorf("Expected 16093YB, got %v", s)
}
s = BigIBytes(b)
if s != "13312YiB" {
t.Errorf("Expected 13312YiB, got %v", s)
}
}
func TestParseVeryBig(t *testing.T) {
tests := []struct {
in string
out string
}{
{"16ZB", "16000000000000000000000"},
{"16ZiB", "18889465931478580854784"},
{"16.5ZB", "16500000000000000000000"},
{"16.5ZiB", "19479761741837286506496"},
{"16Z", "16000000000000000000000"},
{"16Zi", "18889465931478580854784"},
{"16.5Z", "16500000000000000000000"},
{"16.5Zi", "19479761741837286506496"},
{"16YB", "16000000000000000000000000"},
{"16YiB", "19342813113834066795298816"},
{"16.5YB", "16500000000000000000000000"},
{"16.5YiB", "19947276023641381382651904"},
{"16Y", "16000000000000000000000000"},
{"16Yi", "19342813113834066795298816"},
{"16.5Y", "16500000000000000000000000"},
{"16.5Yi", "19947276023641381382651904"},
}
for _, test := range tests {
x, err := ParseBigBytes(test.in)
if err != nil {
t.Errorf("Error parsing %q: %v", test.in, err)
continue
}
if x.String() != test.out {
t.Errorf("Expected %q for %q, got %v", test.out, test.in, x)
}
}
}
func BenchmarkParseBigBytes(b *testing.B) {
for i := 0; i < b.N; i++ {
ParseBigBytes("16.5Z")
}
}
func BenchmarkBigBytes(b *testing.B) {
for i := 0; i < b.N; i++ {
bibyte(16.5 * GByte)
}
}
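For orientation, a minimal sketch of how the big-integer helpers exercised above are called from application code, assuming the upstream import path github.com/dustin/go-humanize:

package main

import (
	"fmt"
	"math/big"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	// Values too large for uint64 are formatted via *big.Int,
	// in SI (base-1000) and IEC (base-1024) units.
	fifteenZB, _ := (&big.Int{}).SetString("15347691069326346944512", 10)
	fmt.Println(humanize.BigBytes(fifteenZB))  // "15ZB" per the expectations above
	fmt.Println(humanize.BigIBytes(fifteenZB)) // "13ZiB"

	// ParseBigBytes reverses the formatting.
	if n, err := humanize.ParseBigBytes("16.5ZiB"); err == nil {
		fmt.Println(n.String()) // "19479761741837286506496"
	}
}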

View File

@@ -1,144 +0,0 @@
package humanize
import (
"testing"
)
func TestByteParsing(t *testing.T) {
tests := []struct {
in string
exp uint64
}{
{"42", 42},
{"42MB", 42000000},
{"42MiB", 44040192},
{"42mb", 42000000},
{"42mib", 44040192},
{"42MIB", 44040192},
{"42 MB", 42000000},
{"42 MiB", 44040192},
{"42 mb", 42000000},
{"42 mib", 44040192},
{"42 MIB", 44040192},
{"42.5MB", 42500000},
{"42.5MiB", 44564480},
{"42.5 MB", 42500000},
{"42.5 MiB", 44564480},
// No need to say B
{"42M", 42000000},
{"42Mi", 44040192},
{"42m", 42000000},
{"42mi", 44040192},
{"42MI", 44040192},
{"42 M", 42000000},
{"42 Mi", 44040192},
{"42 m", 42000000},
{"42 mi", 44040192},
{"42 MI", 44040192},
{"42.5M", 42500000},
{"42.5Mi", 44564480},
{"42.5 M", 42500000},
{"42.5 Mi", 44564480},
// Large value testing; parsing breaks for inputs much larger than this.
{"12.5 EB", uint64(12.5 * float64(EByte))},
{"12.5 E", uint64(12.5 * float64(EByte))},
{"12.5 EiB", uint64(12.5 * float64(EiByte))},
}
for _, p := range tests {
got, err := ParseBytes(p.in)
if err != nil {
t.Errorf("Couldn't parse %v: %v", p.in, err)
}
if got != p.exp {
t.Errorf("Expected %v for %v, got %v",
p.exp, p.in, got)
}
}
}
func TestByteErrors(t *testing.T) {
got, err := ParseBytes("84 JB")
if err == nil {
t.Errorf("Expected error, got %v", got)
}
got, err = ParseBytes("")
if err == nil {
t.Errorf("Expected error parsing nothing")
}
got, err = ParseBytes("16 EiB")
if err == nil {
t.Errorf("Expected error, got %v", got)
}
}
func TestBytes(t *testing.T) {
testList{
{"bytes(0)", Bytes(0), "0B"},
{"bytes(1)", Bytes(1), "1B"},
{"bytes(803)", Bytes(803), "803B"},
{"bytes(999)", Bytes(999), "999B"},
{"bytes(1024)", Bytes(1024), "1.0KB"},
{"bytes(9999)", Bytes(9999), "10KB"},
{"bytes(1MB - 1)", Bytes(MByte - Byte), "1000KB"},
{"bytes(1MB)", Bytes(1024 * 1024), "1.0MB"},
{"bytes(1GB - 1K)", Bytes(GByte - KByte), "1000MB"},
{"bytes(1GB)", Bytes(GByte), "1.0GB"},
{"bytes(1TB - 1M)", Bytes(TByte - MByte), "1000GB"},
{"bytes(10MB)", Bytes(9999 * 1000), "10MB"},
{"bytes(1TB)", Bytes(TByte), "1.0TB"},
{"bytes(1PB - 1T)", Bytes(PByte - TByte), "999TB"},
{"bytes(1PB)", Bytes(PByte), "1.0PB"},
{"bytes(1PB - 1T)", Bytes(EByte - PByte), "999PB"},
{"bytes(1EB)", Bytes(EByte), "1.0EB"},
// Overflows.
// {"bytes(1EB - 1P)", Bytes((KByte*EByte)-PByte), "1023EB"},
{"bytes(0)", IBytes(0), "0B"},
{"bytes(1)", IBytes(1), "1B"},
{"bytes(803)", IBytes(803), "803B"},
{"bytes(1023)", IBytes(1023), "1023B"},
{"bytes(1024)", IBytes(1024), "1.0KiB"},
{"bytes(1MB - 1)", IBytes(MiByte - IByte), "1024KiB"},
{"bytes(1MB)", IBytes(1024 * 1024), "1.0MiB"},
{"bytes(1GB - 1K)", IBytes(GiByte - KiByte), "1024MiB"},
{"bytes(1GB)", IBytes(GiByte), "1.0GiB"},
{"bytes(1TB - 1M)", IBytes(TiByte - MiByte), "1024GiB"},
{"bytes(1TB)", IBytes(TiByte), "1.0TiB"},
{"bytes(1PB - 1T)", IBytes(PiByte - TiByte), "1023TiB"},
{"bytes(1PB)", IBytes(PiByte), "1.0PiB"},
{"bytes(1PB - 1T)", IBytes(EiByte - PiByte), "1023PiB"},
{"bytes(1EiB)", IBytes(EiByte), "1.0EiB"},
// Overflows.
// {"bytes(1EB - 1P)", IBytes((KIByte*EIByte)-PiByte), "1023EB"},
{"bytes(5.5GiB)", IBytes(5.5 * GiByte), "5.5GiB"},
{"bytes(5.5GB)", Bytes(5.5 * GByte), "5.5GB"},
}.validate(t)
}
func BenchmarkParseBytes(b *testing.B) {
for i := 0; i < b.N; i++ {
ParseBytes("16.5GB")
}
}
func BenchmarkBytes(b *testing.B) {
for i := 0; i < b.N; i++ {
Bytes(16.5 * GByte)
}
}
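A short usage sketch for the uint64 formatting and parsing functions these tests cover; the expected strings follow the tests above, and the import path is assumed to be the upstream one:

package main

import (
	"fmt"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	// SI (base-1000) vs IEC (base-1024) formatting of a byte count.
	fmt.Println(humanize.Bytes(82854982))  // "83MB"
	fmt.Println(humanize.IBytes(82854982)) // "79MiB"

	// Parsing accepts optional spaces, mixed case, and an optional trailing "B".
	if n, err := humanize.ParseBytes("42.5 MiB"); err == nil {
		fmt.Println(n) // 44564480
	}
}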

View File

@@ -1,134 +0,0 @@
package humanize
import (
"math"
"math/big"
"testing"
)
func TestCommas(t *testing.T) {
testList{
{"0", Comma(0), "0"},
{"10", Comma(10), "10"},
{"100", Comma(100), "100"},
{"1,000", Comma(1000), "1,000"},
{"10,000", Comma(10000), "10,000"},
{"100,000", Comma(100000), "100,000"},
{"10,000,000", Comma(10000000), "10,000,000"},
{"10,100,000", Comma(10100000), "10,100,000"},
{"10,010,000", Comma(10010000), "10,010,000"},
{"10,001,000", Comma(10001000), "10,001,000"},
{"123,456,789", Comma(123456789), "123,456,789"},
{"maxint", Comma(9.223372e+18), "9,223,372,000,000,000,000"},
{"minint", Comma(-9.223372e+18), "-9,223,372,000,000,000,000"},
{"-123,456,789", Comma(-123456789), "-123,456,789"},
{"-10,100,000", Comma(-10100000), "-10,100,000"},
{"-10,010,000", Comma(-10010000), "-10,010,000"},
{"-10,001,000", Comma(-10001000), "-10,001,000"},
{"-10,000,000", Comma(-10000000), "-10,000,000"},
{"-100,000", Comma(-100000), "-100,000"},
{"-10,000", Comma(-10000), "-10,000"},
{"-1,000", Comma(-1000), "-1,000"},
{"-100", Comma(-100), "-100"},
{"-10", Comma(-10), "-10"},
}.validate(t)
}
func TestCommafs(t *testing.T) {
testList{
{"0", Commaf(0), "0"},
{"10.11", Commaf(10.11), "10.11"},
{"100", Commaf(100), "100"},
{"1,000", Commaf(1000), "1,000"},
{"10,000", Commaf(10000), "10,000"},
{"100,000", Commaf(100000), "100,000"},
{"834,142.32", Commaf(834142.32), "834,142.32"},
{"10,000,000", Commaf(10000000), "10,000,000"},
{"10,100,000", Commaf(10100000), "10,100,000"},
{"10,010,000", Commaf(10010000), "10,010,000"},
{"10,001,000", Commaf(10001000), "10,001,000"},
{"123,456,789", Commaf(123456789), "123,456,789"},
{"maxf64", Commaf(math.MaxFloat64), "179,769,313,486,231,570,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000"},
{"minf64", Commaf(math.SmallestNonzeroFloat64), "0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005"},
{"-123,456,789", Commaf(-123456789), "-123,456,789"},
{"-10,100,000", Commaf(-10100000), "-10,100,000"},
{"-10,010,000", Commaf(-10010000), "-10,010,000"},
{"-10,001,000", Commaf(-10001000), "-10,001,000"},
{"-10,000,000", Commaf(-10000000), "-10,000,000"},
{"-100,000", Commaf(-100000), "-100,000"},
{"-10,000", Commaf(-10000), "-10,000"},
{"-1,000", Commaf(-1000), "-1,000"},
{"-100.11", Commaf(-100.11), "-100.11"},
{"-10", Commaf(-10), "-10"},
}.validate(t)
}
func BenchmarkCommas(b *testing.B) {
for i := 0; i < b.N; i++ {
Comma(1234567890)
}
}
func BenchmarkCommaf(b *testing.B) {
for i := 0; i < b.N; i++ {
Commaf(1234567890.83584)
}
}
func BenchmarkBigCommas(b *testing.B) {
for i := 0; i < b.N; i++ {
BigComma(big.NewInt(1234567890))
}
}
func bigComma(i int64) string {
return BigComma(big.NewInt(i))
}
func TestBigCommas(t *testing.T) {
testList{
{"0", bigComma(0), "0"},
{"10", bigComma(10), "10"},
{"100", bigComma(100), "100"},
{"1,000", bigComma(1000), "1,000"},
{"10,000", bigComma(10000), "10,000"},
{"100,000", bigComma(100000), "100,000"},
{"10,000,000", bigComma(10000000), "10,000,000"},
{"10,100,000", bigComma(10100000), "10,100,000"},
{"10,010,000", bigComma(10010000), "10,010,000"},
{"10,001,000", bigComma(10001000), "10,001,000"},
{"123,456,789", bigComma(123456789), "123,456,789"},
{"maxint", bigComma(9.223372e+18), "9,223,372,000,000,000,000"},
{"minint", bigComma(-9.223372e+18), "-9,223,372,000,000,000,000"},
{"-123,456,789", bigComma(-123456789), "-123,456,789"},
{"-10,100,000", bigComma(-10100000), "-10,100,000"},
{"-10,010,000", bigComma(-10010000), "-10,010,000"},
{"-10,001,000", bigComma(-10001000), "-10,001,000"},
{"-10,000,000", bigComma(-10000000), "-10,000,000"},
{"-100,000", bigComma(-100000), "-100,000"},
{"-10,000", bigComma(-10000), "-10,000"},
{"-1,000", bigComma(-1000), "-1,000"},
{"-100", bigComma(-100), "-100"},
{"-10", bigComma(-10), "-10"},
}.validate(t)
}
func TestVeryBigCommas(t *testing.T) {
tests := []struct{ in, exp string }{
{
"84889279597249724975972597249849757294578485",
"84,889,279,597,249,724,975,972,597,249,849,757,294,578,485",
},
{
"-84889279597249724975972597249849757294578485",
"-84,889,279,597,249,724,975,972,597,249,849,757,294,578,485",
},
}
for _, test := range tests {
n, _ := (&big.Int{}).SetString(test.in, 10)
got := BigComma(n)
if test.exp != got {
t.Errorf("Expected %q, got %q", test.exp, got)
}
}
}
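A brief sketch of the comma-grouping helpers in use; outputs mirror the table entries above:

package main

import (
	"fmt"
	"math/big"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	fmt.Println(humanize.Comma(123456789))  // "123,456,789"
	fmt.Println(humanize.Commaf(834142.32)) // "834,142.32"

	// Values beyond int64 go through the *big.Int variant.
	n, _ := (&big.Int{}).SetString("84889279597249724975972597249849757294578485", 10)
	fmt.Println(humanize.BigComma(n)) // "84,889,279,...,578,485"
}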

View File

@@ -1,18 +0,0 @@
package humanize
import (
"testing"
)
type testList []struct {
name, got, exp string
}
func (tl testList) validate(t *testing.T) {
for _, test := range tl {
if test.got != test.exp {
t.Errorf("On %v, expected '%v', but got '%v'",
test.name, test.exp, test.got)
}
}
}

View File

@@ -1,55 +0,0 @@
package humanize
import (
"fmt"
"regexp"
"strconv"
"testing"
)
func TestFtoa(t *testing.T) {
testList{
{"200", Ftoa(200), "200"},
{"2", Ftoa(2), "2"},
{"2.2", Ftoa(2.2), "2.2"},
{"2.02", Ftoa(2.02), "2.02"},
{"200.02", Ftoa(200.02), "200.02"},
}.validate(t)
}
func BenchmarkFtoaRegexTrailing(b *testing.B) {
trailingZerosRegex := regexp.MustCompile(`\.?0+$`)
b.ResetTimer()
for i := 0; i < b.N; i++ {
trailingZerosRegex.ReplaceAllString("2.00000", "")
trailingZerosRegex.ReplaceAllString("2.0000", "")
trailingZerosRegex.ReplaceAllString("2.000", "")
trailingZerosRegex.ReplaceAllString("2.00", "")
trailingZerosRegex.ReplaceAllString("2.0", "")
trailingZerosRegex.ReplaceAllString("2", "")
}
}
func BenchmarkFtoaFunc(b *testing.B) {
for i := 0; i < b.N; i++ {
stripTrailingZeros("2.00000")
stripTrailingZeros("2.0000")
stripTrailingZeros("2.000")
stripTrailingZeros("2.00")
stripTrailingZeros("2.0")
stripTrailingZeros("2")
}
}
func BenchmarkFmtF(b *testing.B) {
for i := 0; i < b.N; i++ {
fmt.Sprintf("%f", 2.03584)
}
}
func BenchmarkStrconvF(b *testing.B) {
for i := 0; i < b.N; i++ {
strconv.FormatFloat(2.03584, 'f', 6, 64)
}
}
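A small sketch showing what Ftoa does compared to fmt's %f, which is the behavior the benchmarks above compare:

package main

import (
	"fmt"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	// Ftoa formats a float and strips trailing zeros (and a dangling decimal point).
	fmt.Println(humanize.Ftoa(2.0))    // "2"
	fmt.Println(humanize.Ftoa(200.02)) // "200.02"

	// For comparison, the plain %f verb keeps the zeros.
	fmt.Printf("%f\n", 2.0) // "2.000000"
}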

View File

@@ -1,22 +0,0 @@
package humanize
import (
"testing"
)
func TestOrdinals(t *testing.T) {
testList{
{"0", Ordinal(0), "0th"},
{"1", Ordinal(1), "1st"},
{"2", Ordinal(2), "2nd"},
{"3", Ordinal(3), "3rd"},
{"4", Ordinal(4), "4th"},
{"10", Ordinal(10), "10th"},
{"11", Ordinal(11), "11th"},
{"12", Ordinal(12), "12th"},
{"13", Ordinal(13), "13th"},
{"101", Ordinal(101), "101st"},
{"102", Ordinal(102), "102nd"},
{"103", Ordinal(103), "103rd"},
}.validate(t)
}
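A one-line sketch of Ordinal, covering the 11th/13th special cases the table checks:

package main

import (
	"fmt"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	// 11, 12 and 13 take "th" despite ending in 1, 2, 3.
	for _, n := range []int{1, 2, 3, 11, 12, 13, 101, 103} {
		fmt.Printf("%s ", humanize.Ordinal(n)) // 1st 2nd 3rd 11th 12th 13th 101st 103rd
	}
	fmt.Println()
}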

View File

@@ -1,98 +0,0 @@
package humanize
import (
"math"
"testing"
)
func TestSI(t *testing.T) {
tests := []struct {
name string
num float64
formatted string
}{
{"e-24", 1e-24, "1yF"},
{"e-21", 1e-21, "1zF"},
{"e-18", 1e-18, "1aF"},
{"e-15", 1e-15, "1fF"},
{"e-12", 1e-12, "1pF"},
{"e-12", 2.2345e-12, "2.2345pF"},
{"e-12", 2.23e-12, "2.23pF"},
{"e-11", 2.23e-11, "22.3pF"},
{"e-10", 2.2e-10, "220pF"},
{"e-9", 2.2e-9, "2.2nF"},
{"e-8", 2.2e-8, "22nF"},
{"e-7", 2.2e-7, "220nF"},
{"e-6", 2.2e-6, "2.2µF"},
{"e-6", 1e-6, "1µF"},
{"e-5", 2.2e-5, "22µF"},
{"e-4", 2.2e-4, "220µF"},
{"e-3", 2.2e-3, "2.2mF"},
{"e-2", 2.2e-2, "22mF"},
{"e-1", 2.2e-1, "220mF"},
{"e+0", 2.2e-0, "2.2F"},
{"e+0", 2.2, "2.2F"},
{"e+1", 2.2e+1, "22F"},
{"0", 0, "0F"},
{"e+1", 22, "22F"},
{"e+2", 2.2e+2, "220F"},
{"e+2", 220, "220F"},
{"e+3", 2.2e+3, "2.2kF"},
{"e+3", 2200, "2.2kF"},
{"e+4", 2.2e+4, "22kF"},
{"e+4", 22000, "22kF"},
{"e+5", 2.2e+5, "220kF"},
{"e+6", 2.2e+6, "2.2MF"},
{"e+6", 1e+6, "1MF"},
{"e+7", 2.2e+7, "22MF"},
{"e+8", 2.2e+8, "220MF"},
{"e+9", 2.2e+9, "2.2GF"},
{"e+10", 2.2e+10, "22GF"},
{"e+11", 2.2e+11, "220GF"},
{"e+12", 2.2e+12, "2.2TF"},
{"e+15", 2.2e+15, "2.2PF"},
{"e+18", 2.2e+18, "2.2EF"},
{"e+21", 2.2e+21, "2.2ZF"},
{"e+24", 2.2e+24, "2.2YF"},
// special case
{"1F", 1000 * 1000, "1MF"},
{"1F", 1e6, "1MF"},
}
for _, test := range tests {
got := SI(test.num, "F")
if got != test.formatted {
t.Errorf("On %v (%v), got %v, wanted %v",
test.name, test.num, got, test.formatted)
}
gotf, gotu, err := ParseSI(test.formatted)
if err != nil {
t.Errorf("Error parsing %v (%v): %v", test.name, test.formatted, err)
continue
}
if math.Abs(1-(gotf/test.num)) > 0.01 {
t.Errorf("On %v (%v), got %v, wanted %v (±%v)",
test.name, test.formatted, gotf, test.num,
math.Abs(1-(gotf/test.num)))
}
if gotu != "F" {
t.Errorf("On %v (%v), expected unit F, got %v",
test.name, test.formatted, gotu)
}
}
// Parse error
gotf, gotu, err := ParseSI("x1.21JW") // 1.21 jigga whats
if err == nil {
t.Errorf("Expected error on x1.21JW, got %v %v", gotf, gotu)
}
}
func BenchmarkParseSI(b *testing.B) {
for i := 0; i < b.N; i++ {
ParseSI("2.2346ZB")
}
}
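A minimal sketch of SI formatting and parsing round-tripping a value with a unit, as the loop above does:

package main

import (
	"fmt"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	// SI picks a metric prefix for the magnitude and appends the caller's unit.
	fmt.Println(humanize.SI(2.2345e-12, "F")) // "2.2345pF"
	fmt.Println(humanize.SI(2.2e9, "F"))      // "2.2GF"

	// ParseSI reverses it, returning the numeric value and the bare unit.
	if value, unit, err := humanize.ParseSI("2.2GF"); err == nil {
		fmt.Println(value, unit) // 2.2e+09 F
	}
}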

View File

@@ -1,71 +0,0 @@
package humanize
import (
"math"
"testing"
"time"
)
func TestPast(t *testing.T) {
now := time.Now().Unix()
testList{
{"now", Time(time.Unix(now, 0)), "now"},
{"1 second ago", Time(time.Unix(now-1, 0)), "1 second ago"},
{"12 seconds ago", Time(time.Unix(now-12, 0)), "12 seconds ago"},
{"30 seconds ago", Time(time.Unix(now-30, 0)), "30 seconds ago"},
{"45 seconds ago", Time(time.Unix(now-45, 0)), "45 seconds ago"},
{"1 minute ago", Time(time.Unix(now-63, 0)), "1 minute ago"},
{"15 minutes ago", Time(time.Unix(now-15*Minute, 0)), "15 minutes ago"},
{"1 hour ago", Time(time.Unix(now-63*Minute, 0)), "1 hour ago"},
{"2 hours ago", Time(time.Unix(now-2*Hour, 0)), "2 hours ago"},
{"21 hours ago", Time(time.Unix(now-21*Hour, 0)), "21 hours ago"},
{"1 day ago", Time(time.Unix(now-26*Hour, 0)), "1 day ago"},
{"2 days ago", Time(time.Unix(now-49*Hour, 0)), "2 days ago"},
{"3 days ago", Time(time.Unix(now-3*Day, 0)), "3 days ago"},
{"1 week ago (1)", Time(time.Unix(now-7*Day, 0)), "1 week ago"},
{"1 week ago (2)", Time(time.Unix(now-12*Day, 0)), "1 week ago"},
{"2 weeks ago", Time(time.Unix(now-15*Day, 0)), "2 weeks ago"},
{"1 month ago", Time(time.Unix(now-39*Day, 0)), "1 month ago"},
{"3 months ago", Time(time.Unix(now-99*Day, 0)), "3 months ago"},
{"1 year ago (1)", Time(time.Unix(now-365*Day, 0)), "1 year ago"},
{"1 year ago (1)", Time(time.Unix(now-400*Day, 0)), "1 year ago"},
{"2 years ago (1)", Time(time.Unix(now-548*Day, 0)), "2 years ago"},
{"2 years ago (2)", Time(time.Unix(now-725*Day, 0)), "2 years ago"},
{"2 years ago (3)", Time(time.Unix(now-800*Day, 0)), "2 years ago"},
{"3 years ago", Time(time.Unix(now-3*Year, 0)), "3 years ago"},
{"long ago", Time(time.Unix(now-LongTime, 0)), "a long while ago"},
}.validate(t)
}
func TestFuture(t *testing.T) {
now := time.Now().Unix()
testList{
{"now", Time(time.Unix(now, 0)), "now"},
{"1 second from now", Time(time.Unix(now+1, 0)), "1 second from now"},
{"12 seconds from now", Time(time.Unix(now+12, 0)), "12 seconds from now"},
{"30 seconds from now", Time(time.Unix(now+30, 0)), "30 seconds from now"},
{"45 seconds from now", Time(time.Unix(now+45, 0)), "45 seconds from now"},
{"15 minutes from now", Time(time.Unix(now+15*Minute, 0)), "15 minutes from now"},
{"2 hours from now", Time(time.Unix(now+2*Hour, 0)), "2 hours from now"},
{"21 hours from now", Time(time.Unix(now+21*Hour, 0)), "21 hours from now"},
{"1 day from now", Time(time.Unix(now+26*Hour, 0)), "1 day from now"},
{"2 days from now", Time(time.Unix(now+49*Hour, 0)), "2 days from now"},
{"3 days from now", Time(time.Unix(now+3*Day, 0)), "3 days from now"},
{"1 week from now (1)", Time(time.Unix(now+7*Day, 0)), "1 week from now"},
{"1 week from now (2)", Time(time.Unix(now+12*Day, 0)), "1 week from now"},
{"2 weeks from now", Time(time.Unix(now+15*Day, 0)), "2 weeks from now"},
{"1 month from now", Time(time.Unix(now+30*Day, 0)), "1 month from now"},
{"1 year from now", Time(time.Unix(now+365*Day, 0)), "1 year from now"},
{"2 years from now", Time(time.Unix(now+2*Year, 0)), "2 years from now"},
{"a while from now", Time(time.Unix(now+LongTime, 0)), "a long while from now"},
}.validate(t)
}
func TestRange(t *testing.T) {
start := time.Time{}
end := time.Unix(math.MaxInt64, math.MaxInt64)
x := RelTime(start, end, "ago", "from now")
if x != "a long while from now" {
t.Errorf("Expected a long while from now, got %q", x)
}
}
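A short sketch of the relative-time helpers; the expected phrases follow the tables above:

package main

import (
	"fmt"
	"time"

	humanize "github.com/dustin/go-humanize" // assumed import path
)

func main() {
	// Time renders a time relative to now.
	fmt.Println(humanize.Time(time.Now().Add(-26 * time.Hour))) // "1 day ago"

	// RelTime lets the caller choose the reference point and the labels.
	start := time.Now()
	end := start.Add(15 * 24 * time.Hour)
	fmt.Println(humanize.RelTime(start, end, "ago", "from now")) // "2 weeks from now"
}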

View File

@@ -1,7 +0,0 @@
language: go
go:
- 1.0
- 1.1
- 1.2
- tip

View File

@@ -1,161 +0,0 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package context
import (
"net/http"
"testing"
)
type keyType int
const (
key1 keyType = iota
key2
)
func TestContext(t *testing.T) {
assertEqual := func(val interface{}, exp interface{}) {
if val != exp {
t.Errorf("Expected %v, got %v.", exp, val)
}
}
r, _ := http.NewRequest("GET", "http://localhost:8080/", nil)
emptyR, _ := http.NewRequest("GET", "http://localhost:8080/", nil)
// Get()
assertEqual(Get(r, key1), nil)
// Set()
Set(r, key1, "1")
assertEqual(Get(r, key1), "1")
assertEqual(len(data[r]), 1)
Set(r, key2, "2")
assertEqual(Get(r, key2), "2")
assertEqual(len(data[r]), 2)
//GetOk
value, ok := GetOk(r, key1)
assertEqual(value, "1")
assertEqual(ok, true)
value, ok = GetOk(r, "not exists")
assertEqual(value, nil)
assertEqual(ok, false)
Set(r, "nil value", nil)
value, ok = GetOk(r, "nil value")
assertEqual(value, nil)
assertEqual(ok, true)
// GetAll()
values := GetAll(r)
assertEqual(len(values), 3)
// GetAll() for empty request
values = GetAll(emptyR)
if values != nil {
t.Error("GetAll didn't return nil value for invalid request")
}
// GetAllOk()
values, ok = GetAllOk(r)
assertEqual(len(values), 3)
assertEqual(ok, true)
// GetAllOk() for empty request
values, ok = GetAllOk(emptyR)
assertEqual(value, nil)
assertEqual(ok, false)
// Delete()
Delete(r, key1)
assertEqual(Get(r, key1), nil)
assertEqual(len(data[r]), 2)
Delete(r, key2)
assertEqual(Get(r, key2), nil)
assertEqual(len(data[r]), 1)
// Clear()
Clear(r)
assertEqual(len(data), 0)
}
func parallelReader(r *http.Request, key string, iterations int, wait, done chan struct{}) {
<-wait
for i := 0; i < iterations; i++ {
Get(r, key)
}
done <- struct{}{}
}
func parallelWriter(r *http.Request, key, value string, iterations int, wait, done chan struct{}) {
<-wait
for i := 0; i < iterations; i++ {
Set(r, key, value)
}
done <- struct{}{}
}
func benchmarkMutex(b *testing.B, numReaders, numWriters, iterations int) {
b.StopTimer()
r, _ := http.NewRequest("GET", "http://localhost:8080/", nil)
done := make(chan struct{})
b.StartTimer()
for i := 0; i < b.N; i++ {
wait := make(chan struct{})
for i := 0; i < numReaders; i++ {
go parallelReader(r, "test", iterations, wait, done)
}
for i := 0; i < numWriters; i++ {
go parallelWriter(r, "test", "123", iterations, wait, done)
}
close(wait)
for i := 0; i < numReaders+numWriters; i++ {
<-done
}
}
}
func BenchmarkMutexSameReadWrite1(b *testing.B) {
benchmarkMutex(b, 1, 1, 32)
}
func BenchmarkMutexSameReadWrite2(b *testing.B) {
benchmarkMutex(b, 2, 2, 32)
}
func BenchmarkMutexSameReadWrite4(b *testing.B) {
benchmarkMutex(b, 4, 4, 32)
}
func BenchmarkMutex1(b *testing.B) {
benchmarkMutex(b, 2, 8, 32)
}
func BenchmarkMutex2(b *testing.B) {
benchmarkMutex(b, 16, 4, 64)
}
func BenchmarkMutex3(b *testing.B) {
benchmarkMutex(b, 1, 2, 128)
}
func BenchmarkMutex4(b *testing.B) {
benchmarkMutex(b, 128, 32, 256)
}
func BenchmarkMutex5(b *testing.B) {
benchmarkMutex(b, 1024, 2048, 64)
}
func BenchmarkMutex6(b *testing.B) {
benchmarkMutex(b, 2048, 1024, 512)
}
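For context, a minimal handler sketch using the gorilla/context API these tests and benchmarks cover (the mutex-protected per-request map is what benchmarkMutex measures); the import path is assumed to be github.com/gorilla/context:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/context" // assumed import path
)

// Typed keys avoid collisions between packages storing values on the same request.
type ctxKey int

const userKey ctxKey = iota

func handler(w http.ResponseWriter, r *http.Request) {
	// Values live in a global map keyed by *http.Request, guarded by a mutex.
	context.Set(r, userKey, "alice")

	if user, ok := context.GetOk(r, userKey); ok {
		fmt.Fprintf(w, "hello %v\n", user)
	}
}

func main() {
	http.HandleFunc("/", handler)
	// ClearHandler drops each request's values after it is served, so the map does not leak.
	log.Fatal(http.ListenAndServe(":8080", context.ClearHandler(http.DefaultServeMux)))
}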

View File

@@ -1,7 +0,0 @@
language: go
go:
- 1.0
- 1.1
- 1.2
- tip

View File

@@ -1,7 +0,0 @@
mux
===
[![Build Status](https://travis-ci.org/gorilla/mux.png?branch=master)](https://travis-ci.org/gorilla/mux)
gorilla/mux is a powerful URL router and dispatcher.
Read the full documentation here: http://www.gorillatoolkit.org/pkg/mux
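A minimal routing sketch with path variables, along the lines of the tests that follow (import path assumed to be github.com/gorilla/mux):

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux" // assumed import path
)

func main() {
	r := mux.NewRouter()

	// Path variables with optional regexp constraints; mux.Vars extracts them per request.
	r.HandleFunc("/articles/{category}/{id:[0-9]+}", func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req)
		fmt.Fprintf(w, "category=%s id=%s\n", vars["category"], vars["id"])
	}).Methods("GET")

	log.Fatal(http.ListenAndServe(":8080", r))
}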

View File

@@ -1,21 +0,0 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import (
"net/http"
"testing"
)
func BenchmarkMux(b *testing.B) {
router := new(Router)
handler := func(w http.ResponseWriter, r *http.Request) {}
router.HandleFunc("/v1/{v1}", handler)
request, _ := http.NewRequest("GET", "/v1/anything", nil)
for i := 0; i < b.N; i++ {
router.ServeHTTP(nil, request)
}
}

View File

@@ -1,943 +0,0 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import (
"fmt"
"net/http"
"testing"
"github.com/gorilla/context"
)
type routeTest struct {
title string // title of the test
route *Route // the route being tested
request *http.Request // a request to test the route
vars map[string]string // the expected vars of the match
host string // the expected host of the match
path string // the expected path of the match
shouldMatch bool // whether the request is expected to match the route at all
shouldRedirect bool // whether the request should result in a redirect
}
func TestHost(t *testing.T) {
// newRequestHost creates a new request with a method, url, and host header
newRequestHost := func(method, url, host string) *http.Request {
req, err := http.NewRequest(method, url, nil)
if err != nil {
panic(err)
}
req.Host = host
return req
}
tests := []routeTest{
{
title: "Host route match",
route: new(Route).Host("aaa.bbb.ccc"),
request: newRequest("GET", "http://aaa.bbb.ccc/111/222/333"),
vars: map[string]string{},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: true,
},
{
title: "Host route, wrong host in request URL",
route: new(Route).Host("aaa.bbb.ccc"),
request: newRequest("GET", "http://aaa.222.ccc/111/222/333"),
vars: map[string]string{},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: false,
},
{
title: "Host route with port, match",
route: new(Route).Host("aaa.bbb.ccc:1234"),
request: newRequest("GET", "http://aaa.bbb.ccc:1234/111/222/333"),
vars: map[string]string{},
host: "aaa.bbb.ccc:1234",
path: "",
shouldMatch: true,
},
{
title: "Host route with port, wrong port in request URL",
route: new(Route).Host("aaa.bbb.ccc:1234"),
request: newRequest("GET", "http://aaa.bbb.ccc:9999/111/222/333"),
vars: map[string]string{},
host: "aaa.bbb.ccc:1234",
path: "",
shouldMatch: false,
},
{
title: "Host route, match with host in request header",
route: new(Route).Host("aaa.bbb.ccc"),
request: newRequestHost("GET", "/111/222/333", "aaa.bbb.ccc"),
vars: map[string]string{},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: true,
},
{
title: "Host route, wrong host in request header",
route: new(Route).Host("aaa.bbb.ccc"),
request: newRequestHost("GET", "/111/222/333", "aaa.222.ccc"),
vars: map[string]string{},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: false,
},
// BUG {new(Route).Host("aaa.bbb.ccc:1234"), newRequestHost("GET", "/111/222/333", "aaa.bbb.ccc:1234"), map[string]string{}, "aaa.bbb.ccc:1234", "", true},
{
title: "Host route with port, wrong host in request header",
route: new(Route).Host("aaa.bbb.ccc:1234"),
request: newRequestHost("GET", "/111/222/333", "aaa.bbb.ccc:9999"),
vars: map[string]string{},
host: "aaa.bbb.ccc:1234",
path: "",
shouldMatch: false,
},
{
title: "Host route with pattern, match",
route: new(Route).Host("aaa.{v1:[a-z]{3}}.ccc"),
request: newRequest("GET", "http://aaa.bbb.ccc/111/222/333"),
vars: map[string]string{"v1": "bbb"},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: true,
},
{
title: "Host route with pattern, wrong host in request URL",
route: new(Route).Host("aaa.{v1:[a-z]{3}}.ccc"),
request: newRequest("GET", "http://aaa.222.ccc/111/222/333"),
vars: map[string]string{"v1": "bbb"},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: false,
},
{
title: "Host route with multiple patterns, match",
route: new(Route).Host("{v1:[a-z]{3}}.{v2:[a-z]{3}}.{v3:[a-z]{3}}"),
request: newRequest("GET", "http://aaa.bbb.ccc/111/222/333"),
vars: map[string]string{"v1": "aaa", "v2": "bbb", "v3": "ccc"},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: true,
},
{
title: "Host route with multiple patterns, wrong host in request URL",
route: new(Route).Host("{v1:[a-z]{3}}.{v2:[a-z]{3}}.{v3:[a-z]{3}}"),
request: newRequest("GET", "http://aaa.222.ccc/111/222/333"),
vars: map[string]string{"v1": "aaa", "v2": "bbb", "v3": "ccc"},
host: "aaa.bbb.ccc",
path: "",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestPath(t *testing.T) {
tests := []routeTest{
{
title: "Path route, match",
route: new(Route).Path("/111/222/333"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{},
host: "",
path: "/111/222/333",
shouldMatch: true,
},
{
title: "Path route, match with trailing slash in request and path",
route: new(Route).Path("/111/"),
request: newRequest("GET", "http://localhost/111/"),
vars: map[string]string{},
host: "",
path: "/111/",
shouldMatch: true,
},
{
title: "Path route, do not match with trailing slash in path",
route: new(Route).Path("/111/"),
request: newRequest("GET", "http://localhost/111"),
vars: map[string]string{},
host: "",
path: "/111",
shouldMatch: false,
},
{
title: "Path route, do not match with trailing slash in request",
route: new(Route).Path("/111"),
request: newRequest("GET", "http://localhost/111/"),
vars: map[string]string{},
host: "",
path: "/111/",
shouldMatch: false,
},
{
title: "Path route, wrong path in request in request URL",
route: new(Route).Path("/111/222/333"),
request: newRequest("GET", "http://localhost/1/2/3"),
vars: map[string]string{},
host: "",
path: "/111/222/333",
shouldMatch: false,
},
{
title: "Path route with pattern, match",
route: new(Route).Path("/111/{v1:[0-9]{3}}/333"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{"v1": "222"},
host: "",
path: "/111/222/333",
shouldMatch: true,
},
{
title: "Path route with pattern, URL in request does not match",
route: new(Route).Path("/111/{v1:[0-9]{3}}/333"),
request: newRequest("GET", "http://localhost/111/aaa/333"),
vars: map[string]string{"v1": "222"},
host: "",
path: "/111/222/333",
shouldMatch: false,
},
{
title: "Path route with multiple patterns, match",
route: new(Route).Path("/{v1:[0-9]{3}}/{v2:[0-9]{3}}/{v3:[0-9]{3}}"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{"v1": "111", "v2": "222", "v3": "333"},
host: "",
path: "/111/222/333",
shouldMatch: true,
},
{
title: "Path route with multiple patterns, URL in request does not match",
route: new(Route).Path("/{v1:[0-9]{3}}/{v2:[0-9]{3}}/{v3:[0-9]{3}}"),
request: newRequest("GET", "http://localhost/111/aaa/333"),
vars: map[string]string{"v1": "111", "v2": "222", "v3": "333"},
host: "",
path: "/111/222/333",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestPathPrefix(t *testing.T) {
tests := []routeTest{
{
title: "PathPrefix route, match",
route: new(Route).PathPrefix("/111"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{},
host: "",
path: "/111",
shouldMatch: true,
},
{
title: "PathPrefix route, match substring",
route: new(Route).PathPrefix("/1"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{},
host: "",
path: "/1",
shouldMatch: true,
},
{
title: "PathPrefix route, URL prefix in request does not match",
route: new(Route).PathPrefix("/111"),
request: newRequest("GET", "http://localhost/1/2/3"),
vars: map[string]string{},
host: "",
path: "/111",
shouldMatch: false,
},
{
title: "PathPrefix route with pattern, match",
route: new(Route).PathPrefix("/111/{v1:[0-9]{3}}"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{"v1": "222"},
host: "",
path: "/111/222",
shouldMatch: true,
},
{
title: "PathPrefix route with pattern, URL prefix in request does not match",
route: new(Route).PathPrefix("/111/{v1:[0-9]{3}}"),
request: newRequest("GET", "http://localhost/111/aaa/333"),
vars: map[string]string{"v1": "222"},
host: "",
path: "/111/222",
shouldMatch: false,
},
{
title: "PathPrefix route with multiple patterns, match",
route: new(Route).PathPrefix("/{v1:[0-9]{3}}/{v2:[0-9]{3}}"),
request: newRequest("GET", "http://localhost/111/222/333"),
vars: map[string]string{"v1": "111", "v2": "222"},
host: "",
path: "/111/222",
shouldMatch: true,
},
{
title: "PathPrefix route with multiple patterns, URL prefix in request does not match",
route: new(Route).PathPrefix("/{v1:[0-9]{3}}/{v2:[0-9]{3}}"),
request: newRequest("GET", "http://localhost/111/aaa/333"),
vars: map[string]string{"v1": "111", "v2": "222"},
host: "",
path: "/111/222",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestHostPath(t *testing.T) {
tests := []routeTest{
{
title: "Host and Path route, match",
route: new(Route).Host("aaa.bbb.ccc").Path("/111/222/333"),
request: newRequest("GET", "http://aaa.bbb.ccc/111/222/333"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Host and Path route, wrong host in request URL",
route: new(Route).Host("aaa.bbb.ccc").Path("/111/222/333"),
request: newRequest("GET", "http://aaa.222.ccc/111/222/333"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
{
title: "Host and Path route with pattern, match",
route: new(Route).Host("aaa.{v1:[a-z]{3}}.ccc").Path("/111/{v2:[0-9]{3}}/333"),
request: newRequest("GET", "http://aaa.bbb.ccc/111/222/333"),
vars: map[string]string{"v1": "bbb", "v2": "222"},
host: "aaa.bbb.ccc",
path: "/111/222/333",
shouldMatch: true,
},
{
title: "Host and Path route with pattern, URL in request does not match",
route: new(Route).Host("aaa.{v1:[a-z]{3}}.ccc").Path("/111/{v2:[0-9]{3}}/333"),
request: newRequest("GET", "http://aaa.222.ccc/111/222/333"),
vars: map[string]string{"v1": "bbb", "v2": "222"},
host: "aaa.bbb.ccc",
path: "/111/222/333",
shouldMatch: false,
},
{
title: "Host and Path route with multiple patterns, match",
route: new(Route).Host("{v1:[a-z]{3}}.{v2:[a-z]{3}}.{v3:[a-z]{3}}").Path("/{v4:[0-9]{3}}/{v5:[0-9]{3}}/{v6:[0-9]{3}}"),
request: newRequest("GET", "http://aaa.bbb.ccc/111/222/333"),
vars: map[string]string{"v1": "aaa", "v2": "bbb", "v3": "ccc", "v4": "111", "v5": "222", "v6": "333"},
host: "aaa.bbb.ccc",
path: "/111/222/333",
shouldMatch: true,
},
{
title: "Host and Path route with multiple patterns, URL in request does not match",
route: new(Route).Host("{v1:[a-z]{3}}.{v2:[a-z]{3}}.{v3:[a-z]{3}}").Path("/{v4:[0-9]{3}}/{v5:[0-9]{3}}/{v6:[0-9]{3}}"),
request: newRequest("GET", "http://aaa.222.ccc/111/222/333"),
vars: map[string]string{"v1": "aaa", "v2": "bbb", "v3": "ccc", "v4": "111", "v5": "222", "v6": "333"},
host: "aaa.bbb.ccc",
path: "/111/222/333",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestHeaders(t *testing.T) {
// newRequestHeaders creates a new request with a method, url, and headers
newRequestHeaders := func(method, url string, headers map[string]string) *http.Request {
req, err := http.NewRequest(method, url, nil)
if err != nil {
panic(err)
}
for k, v := range headers {
req.Header.Add(k, v)
}
return req
}
tests := []routeTest{
{
title: "Headers route, match",
route: new(Route).Headers("foo", "bar", "baz", "ding"),
request: newRequestHeaders("GET", "http://localhost", map[string]string{"foo": "bar", "baz": "ding"}),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Headers route, bad header values",
route: new(Route).Headers("foo", "bar", "baz", "ding"),
request: newRequestHeaders("GET", "http://localhost", map[string]string{"foo": "bar", "baz": "dong"}),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestMethods(t *testing.T) {
tests := []routeTest{
{
title: "Methods route, match GET",
route: new(Route).Methods("GET", "POST"),
request: newRequest("GET", "http://localhost"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Methods route, match POST",
route: new(Route).Methods("GET", "POST"),
request: newRequest("POST", "http://localhost"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Methods route, bad method",
route: new(Route).Methods("GET", "POST"),
request: newRequest("PUT", "http://localhost"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestQueries(t *testing.T) {
tests := []routeTest{
{
title: "Queries route, match",
route: new(Route).Queries("foo", "bar", "baz", "ding"),
request: newRequest("GET", "http://localhost?foo=bar&baz=ding"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Queries route, match with a query string",
route: new(Route).Host("www.example.com").Path("/api").Queries("foo", "bar", "baz", "ding"),
request: newRequest("GET", "http://www.example.com/api?foo=bar&baz=ding"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Queries route, match with a query string out of order",
route: new(Route).Host("www.example.com").Path("/api").Queries("foo", "bar", "baz", "ding"),
request: newRequest("GET", "http://www.example.com/api?baz=ding&foo=bar"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Queries route, bad query",
route: new(Route).Queries("foo", "bar", "baz", "ding"),
request: newRequest("GET", "http://localhost?foo=bar&baz=dong"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
{
title: "Queries route with pattern, match",
route: new(Route).Queries("foo", "{v1}"),
request: newRequest("GET", "http://localhost?foo=bar"),
vars: map[string]string{"v1": "bar"},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Queries route with multiple patterns, match",
route: new(Route).Queries("foo", "{v1}", "baz", "{v2}"),
request: newRequest("GET", "http://localhost?foo=bar&baz=ding"),
vars: map[string]string{"v1": "bar", "v2": "ding"},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Queries route with regexp pattern, match",
route: new(Route).Queries("foo", "{v1:[0-9]+}"),
request: newRequest("GET", "http://localhost?foo=10"),
vars: map[string]string{"v1": "10"},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Queries route with regexp pattern, regexp does not match",
route: new(Route).Queries("foo", "{v1:[0-9]+}"),
request: newRequest("GET", "http://localhost?foo=a"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestSchemes(t *testing.T) {
tests := []routeTest{
// Schemes
{
title: "Schemes route, match https",
route: new(Route).Schemes("https", "ftp"),
request: newRequest("GET", "https://localhost"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Schemes route, match ftp",
route: new(Route).Schemes("https", "ftp"),
request: newRequest("GET", "ftp://localhost"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "Schemes route, bad scheme",
route: new(Route).Schemes("https", "ftp"),
request: newRequest("GET", "http://localhost"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestMatcherFunc(t *testing.T) {
m := func(r *http.Request, m *RouteMatch) bool {
if r.URL.Host == "aaa.bbb.ccc" {
return true
}
return false
}
tests := []routeTest{
{
title: "MatchFunc route, match",
route: new(Route).MatcherFunc(m),
request: newRequest("GET", "http://aaa.bbb.ccc"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: true,
},
{
title: "MatchFunc route, non-match",
route: new(Route).MatcherFunc(m),
request: newRequest("GET", "http://aaa.222.ccc"),
vars: map[string]string{},
host: "",
path: "",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestSubRouter(t *testing.T) {
subrouter1 := new(Route).Host("{v1:[a-z]+}.google.com").Subrouter()
subrouter2 := new(Route).PathPrefix("/foo/{v1}").Subrouter()
tests := []routeTest{
{
route: subrouter1.Path("/{v2:[a-z]+}"),
request: newRequest("GET", "http://aaa.google.com/bbb"),
vars: map[string]string{"v1": "aaa", "v2": "bbb"},
host: "aaa.google.com",
path: "/bbb",
shouldMatch: true,
},
{
route: subrouter1.Path("/{v2:[a-z]+}"),
request: newRequest("GET", "http://111.google.com/111"),
vars: map[string]string{"v1": "aaa", "v2": "bbb"},
host: "aaa.google.com",
path: "/bbb",
shouldMatch: false,
},
{
route: subrouter2.Path("/baz/{v2}"),
request: newRequest("GET", "http://localhost/foo/bar/baz/ding"),
vars: map[string]string{"v1": "bar", "v2": "ding"},
host: "",
path: "/foo/bar/baz/ding",
shouldMatch: true,
},
{
route: subrouter2.Path("/baz/{v2}"),
request: newRequest("GET", "http://localhost/foo/bar"),
vars: map[string]string{"v1": "bar", "v2": "ding"},
host: "",
path: "/foo/bar/baz/ding",
shouldMatch: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
func TestNamedRoutes(t *testing.T) {
r1 := NewRouter()
r1.NewRoute().Name("a")
r1.NewRoute().Name("b")
r1.NewRoute().Name("c")
r2 := r1.NewRoute().Subrouter()
r2.NewRoute().Name("d")
r2.NewRoute().Name("e")
r2.NewRoute().Name("f")
r3 := r2.NewRoute().Subrouter()
r3.NewRoute().Name("g")
r3.NewRoute().Name("h")
r3.NewRoute().Name("i")
if r1.namedRoutes == nil || len(r1.namedRoutes) != 9 {
t.Errorf("Expected 9 named routes, got %v", r1.namedRoutes)
} else if r1.Get("i") == nil {
t.Errorf("Subroute name not registered")
}
}
func TestStrictSlash(t *testing.T) {
r := NewRouter()
r.StrictSlash(true)
tests := []routeTest{
{
title: "Redirect path without slash",
route: r.NewRoute().Path("/111/"),
request: newRequest("GET", "http://localhost/111"),
vars: map[string]string{},
host: "",
path: "/111/",
shouldMatch: true,
shouldRedirect: true,
},
{
title: "Do not redirect path with slash",
route: r.NewRoute().Path("/111/"),
request: newRequest("GET", "http://localhost/111/"),
vars: map[string]string{},
host: "",
path: "/111/",
shouldMatch: true,
shouldRedirect: false,
},
{
title: "Redirect path with slash",
route: r.NewRoute().Path("/111"),
request: newRequest("GET", "http://localhost/111/"),
vars: map[string]string{},
host: "",
path: "/111",
shouldMatch: true,
shouldRedirect: true,
},
{
title: "Do not redirect path without slash",
route: r.NewRoute().Path("/111"),
request: newRequest("GET", "http://localhost/111"),
vars: map[string]string{},
host: "",
path: "/111",
shouldMatch: true,
shouldRedirect: false,
},
{
title: "Propagate StrictSlash to subrouters",
route: r.NewRoute().PathPrefix("/static/").Subrouter().Path("/images/"),
request: newRequest("GET", "http://localhost/static/images"),
vars: map[string]string{},
host: "",
path: "/static/images/",
shouldMatch: true,
shouldRedirect: true,
},
{
title: "Ignore StrictSlash for path prefix",
route: r.NewRoute().PathPrefix("/static/"),
request: newRequest("GET", "http://localhost/static/logo.png"),
vars: map[string]string{},
host: "",
path: "/static/",
shouldMatch: true,
shouldRedirect: false,
},
}
for _, test := range tests {
testRoute(t, test)
}
}
// ----------------------------------------------------------------------------
// Helpers
// ----------------------------------------------------------------------------
func getRouteTemplate(route *Route) string {
host, path := "none", "none"
if route.regexp != nil {
if route.regexp.host != nil {
host = route.regexp.host.template
}
if route.regexp.path != nil {
path = route.regexp.path.template
}
}
return fmt.Sprintf("Host: %v, Path: %v", host, path)
}
func testRoute(t *testing.T, test routeTest) {
request := test.request
route := test.route
vars := test.vars
shouldMatch := test.shouldMatch
host := test.host
path := test.path
url := test.host + test.path
shouldRedirect := test.shouldRedirect
var match RouteMatch
ok := route.Match(request, &match)
if ok != shouldMatch {
msg := "Should match"
if !shouldMatch {
msg = "Should not match"
}
t.Errorf("(%v) %v:\nRoute: %#v\nRequest: %#v\nVars: %v\n", test.title, msg, route, request, vars)
return
}
if shouldMatch {
if test.vars != nil && !stringMapEqual(test.vars, match.Vars) {
t.Errorf("(%v) Vars not equal: expected %v, got %v", test.title, vars, match.Vars)
return
}
if host != "" {
u, _ := test.route.URLHost(mapToPairs(match.Vars)...)
if host != u.Host {
t.Errorf("(%v) URLHost not equal: expected %v, got %v -- %v", test.title, host, u.Host, getRouteTemplate(route))
return
}
}
if path != "" {
u, _ := route.URLPath(mapToPairs(match.Vars)...)
if path != u.Path {
t.Errorf("(%v) URLPath not equal: expected %v, got %v -- %v", test.title, path, u.Path, getRouteTemplate(route))
return
}
}
if url != "" {
u, _ := route.URL(mapToPairs(match.Vars)...)
if url != u.Host+u.Path {
t.Errorf("(%v) URL not equal: expected %v, got %v -- %v", test.title, url, u.Host+u.Path, getRouteTemplate(route))
return
}
}
if shouldRedirect && match.Handler == nil {
t.Errorf("(%v) Did not redirect", test.title)
return
}
if !shouldRedirect && match.Handler != nil {
t.Errorf("(%v) Unexpected redirect", test.title)
return
}
}
}
// Tests that the context is cleared or not cleared properly depending on
// the configuration of the router
func TestKeepContext(t *testing.T) {
func1 := func(w http.ResponseWriter, r *http.Request) {}
r := NewRouter()
r.HandleFunc("/", func1).Name("func1")
req, _ := http.NewRequest("GET", "http://localhost/", nil)
context.Set(req, "t", 1)
res := new(http.ResponseWriter)
r.ServeHTTP(*res, req)
if _, ok := context.GetOk(req, "t"); ok {
t.Error("Context should have been cleared at end of request")
}
r.KeepContext = true
req, _ = http.NewRequest("GET", "http://localhost/", nil)
context.Set(req, "t", 1)
r.ServeHTTP(*res, req)
if _, ok := context.GetOk(req, "t"); !ok {
t.Error("Context should NOT have been cleared at end of request")
}
}
type TestA301ResponseWriter struct {
hh http.Header
status int
}
func (ho TestA301ResponseWriter) Header() http.Header {
return http.Header(ho.hh)
}
func (ho TestA301ResponseWriter) Write(b []byte) (int, error) {
return 0, nil
}
func (ho TestA301ResponseWriter) WriteHeader(code int) {
ho.status = code
}
func Test301Redirect(t *testing.T) {
m := make(http.Header)
func1 := func(w http.ResponseWriter, r *http.Request) {}
func2 := func(w http.ResponseWriter, r *http.Request) {}
r := NewRouter()
r.HandleFunc("/api/", func2).Name("func2")
r.HandleFunc("/", func1).Name("func1")
req, _ := http.NewRequest("GET", "http://localhost//api/?abc=def", nil)
res := TestA301ResponseWriter{
hh: m,
status: 0,
}
r.ServeHTTP(&res, req)
if "http://localhost/api/?abc=def" != res.hh["Location"][0] {
t.Errorf("Should have complete URL with query string")
}
}
// https://plus.google.com/101022900381697718949/posts/eWy6DjFJ6uW
func TestSubrouterHeader(t *testing.T) {
expected := "func1 response"
func1 := func(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w, expected)
}
func2 := func(http.ResponseWriter, *http.Request) {}
r := NewRouter()
s := r.Headers("SomeSpecialHeader", "").Subrouter()
s.HandleFunc("/", func1).Name("func1")
r.HandleFunc("/", func2).Name("func2")
req, _ := http.NewRequest("GET", "http://localhost/", nil)
req.Header.Add("SomeSpecialHeader", "foo")
match := new(RouteMatch)
matched := r.Match(req, match)
if !matched {
t.Errorf("Should match request")
}
if match.Route.GetName() != "func1" {
t.Errorf("Expecting func1 handler, got %s", match.Route.GetName())
}
resp := NewRecorder()
match.Handler.ServeHTTP(resp, req)
if resp.Body.String() != expected {
t.Errorf("Expecting %q", expected)
}
}
// mapToPairs converts a string map to a slice of string pairs
func mapToPairs(m map[string]string) []string {
var i int
p := make([]string, len(m)*2)
for k, v := range m {
p[i] = k
p[i+1] = v
i += 2
}
return p
}
// stringMapEqual checks the equality of two string maps
func stringMapEqual(m1, m2 map[string]string) bool {
nil1 := m1 == nil
nil2 := m2 == nil
if nil1 != nil2 || len(m1) != len(m2) {
return false
}
for k, v := range m1 {
if v != m2[k] {
return false
}
}
return true
}
// newRequest is a helper function to create a new request with a method and url
func newRequest(method, url string) *http.Request {
req, err := http.NewRequest(method, url, nil)
if err != nil {
panic(err)
}
return req
}
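A brief sketch of combining matchers and reversing a named route into a URL, which is what testRoute verifies above; names like "article" and example.com are illustrative only:

package main

import (
	"fmt"
	"log"

	"github.com/gorilla/mux" // assumed import path
)

func main() {
	r := mux.NewRouter()

	// Matchers can be combined; variables may appear in the host as well as the path.
	r.NewRoute().
		Host("{subdomain:[a-z]+}.example.com").
		Path("/articles/{category}/{id:[0-9]+}").
		Name("article")

	// Named routes are reversible: URL fills the host and path variables in order.
	u, err := r.Get("article").URL(
		"subdomain", "api",
		"category", "technology",
		"id", "42",
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(u.String()) // "http://api.example.com/articles/technology/42"
}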

View File

@@ -1,714 +0,0 @@
// Old tests ported to Go1. This is a mess. Want to drop it one day.
// Copyright 2011 Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import (
"bytes"
"net/http"
"testing"
)
// ----------------------------------------------------------------------------
// ResponseRecorder
// ----------------------------------------------------------------------------
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// ResponseRecorder is an implementation of http.ResponseWriter that
// records its mutations for later inspection in tests.
type ResponseRecorder struct {
Code int // the HTTP response code from WriteHeader
HeaderMap http.Header // the HTTP response headers
Body *bytes.Buffer // if non-nil, the bytes.Buffer to append written data to
Flushed bool
}
// NewRecorder returns an initialized ResponseRecorder.
func NewRecorder() *ResponseRecorder {
return &ResponseRecorder{
HeaderMap: make(http.Header),
Body: new(bytes.Buffer),
}
}
// DefaultRemoteAddr is the default remote address to return in RemoteAddr if
// an explicit DefaultRemoteAddr isn't set on ResponseRecorder.
const DefaultRemoteAddr = "1.2.3.4"
// Header returns the response headers.
func (rw *ResponseRecorder) Header() http.Header {
return rw.HeaderMap
}
// Write always succeeds and writes to rw.Body, if not nil.
func (rw *ResponseRecorder) Write(buf []byte) (int, error) {
if rw.Body != nil {
rw.Body.Write(buf)
}
if rw.Code == 0 {
rw.Code = http.StatusOK
}
return len(buf), nil
}
// WriteHeader sets rw.Code.
func (rw *ResponseRecorder) WriteHeader(code int) {
rw.Code = code
}
// Flush sets rw.Flushed to true.
func (rw *ResponseRecorder) Flush() {
rw.Flushed = true
}
// ----------------------------------------------------------------------------
func TestRouteMatchers(t *testing.T) {
var scheme, host, path, query, method string
var headers map[string]string
var resultVars map[bool]map[string]string
router := NewRouter()
router.NewRoute().Host("{var1}.google.com").
Path("/{var2:[a-z]+}/{var3:[0-9]+}").
Queries("foo", "bar").
Methods("GET").
Schemes("https").
Headers("x-requested-with", "XMLHttpRequest")
router.NewRoute().Host("www.{var4}.com").
PathPrefix("/foo/{var5:[a-z]+}/{var6:[0-9]+}").
Queries("baz", "ding").
Methods("POST").
Schemes("http").
Headers("Content-Type", "application/json")
reset := func() {
// Everything matches.
scheme = "https"
host = "www.google.com"
path = "/product/42"
query = "?foo=bar"
method = "GET"
headers = map[string]string{"X-Requested-With": "XMLHttpRequest"}
resultVars = map[bool]map[string]string{
true: {"var1": "www", "var2": "product", "var3": "42"},
false: {},
}
}
reset2 := func() {
// Everything matches.
scheme = "http"
host = "www.google.com"
path = "/foo/product/42/path/that/is/ignored"
query = "?baz=ding"
method = "POST"
headers = map[string]string{"Content-Type": "application/json"}
resultVars = map[bool]map[string]string{
true: {"var4": "google", "var5": "product", "var6": "42"},
false: {},
}
}
match := func(shouldMatch bool) {
url := scheme + "://" + host + path + query
request, _ := http.NewRequest(method, url, nil)
for key, value := range headers {
request.Header.Add(key, value)
}
var routeMatch RouteMatch
matched := router.Match(request, &routeMatch)
if matched != shouldMatch {
// Need better messages. :)
if matched {
t.Errorf("Should match.")
} else {
t.Errorf("Should not match.")
}
}
if matched {
currentRoute := routeMatch.Route
if currentRoute == nil {
t.Errorf("Expected a current route.")
}
vars := routeMatch.Vars
expectedVars := resultVars[shouldMatch]
if len(vars) != len(expectedVars) {
t.Errorf("Expected vars: %v Got: %v.", expectedVars, vars)
}
for name, value := range vars {
if expectedVars[name] != value {
t.Errorf("Expected vars: %v Got: %v.", expectedVars, vars)
}
}
}
}
// 1st route --------------------------------------------------------------
// Everything matches.
reset()
match(true)
// Scheme doesn't match.
reset()
scheme = "http"
match(false)
// Host doesn't match.
reset()
host = "www.mygoogle.com"
match(false)
// Path doesn't match.
reset()
path = "/product/notdigits"
match(false)
// Query doesn't match.
reset()
query = "?foo=baz"
match(false)
// Method doesn't match.
reset()
method = "POST"
match(false)
// Header doesn't match.
reset()
headers = map[string]string{}
match(false)
// Everything matches, again.
reset()
match(true)
// 2nd route --------------------------------------------------------------
// Everything matches.
reset2()
match(true)
// Scheme doesn't match.
reset2()
scheme = "https"
match(false)
// Host doesn't match.
reset2()
host = "sub.google.com"
match(false)
// Path doesn't match.
reset2()
path = "/bar/product/42"
match(false)
// Query doesn't match.
reset2()
query = "?foo=baz"
match(false)
// Method doesn't match.
reset2()
method = "GET"
match(false)
// Header doesn't match.
reset2()
headers = map[string]string{}
match(false)
// Everything matches, again.
reset2()
match(true)
}
type headerMatcherTest struct {
matcher headerMatcher
headers map[string]string
result bool
}
var headerMatcherTests = []headerMatcherTest{
{
matcher: headerMatcher(map[string]string{"x-requested-with": "XMLHttpRequest"}),
headers: map[string]string{"X-Requested-With": "XMLHttpRequest"},
result: true,
},
{
matcher: headerMatcher(map[string]string{"x-requested-with": ""}),
headers: map[string]string{"X-Requested-With": "anything"},
result: true,
},
{
matcher: headerMatcher(map[string]string{"x-requested-with": "XMLHttpRequest"}),
headers: map[string]string{},
result: false,
},
}
type hostMatcherTest struct {
matcher *Route
url string
vars map[string]string
result bool
}
var hostMatcherTests = []hostMatcherTest{
{
matcher: NewRouter().NewRoute().Host("{foo:[a-z][a-z][a-z]}.{bar:[a-z][a-z][a-z]}.{baz:[a-z][a-z][a-z]}"),
url: "http://abc.def.ghi/",
vars: map[string]string{"foo": "abc", "bar": "def", "baz": "ghi"},
result: true,
},
{
matcher: NewRouter().NewRoute().Host("{foo:[a-z][a-z][a-z]}.{bar:[a-z][a-z][a-z]}.{baz:[a-z][a-z][a-z]}"),
url: "http://a.b.c/",
vars: map[string]string{"foo": "abc", "bar": "def", "baz": "ghi"},
result: false,
},
}
type methodMatcherTest struct {
matcher methodMatcher
method string
result bool
}
var methodMatcherTests = []methodMatcherTest{
{
matcher: methodMatcher([]string{"GET", "POST", "PUT"}),
method: "GET",
result: true,
},
{
matcher: methodMatcher([]string{"GET", "POST", "PUT"}),
method: "POST",
result: true,
},
{
matcher: methodMatcher([]string{"GET", "POST", "PUT"}),
method: "PUT",
result: true,
},
{
matcher: methodMatcher([]string{"GET", "POST", "PUT"}),
method: "DELETE",
result: false,
},
}
type pathMatcherTest struct {
matcher *Route
url string
vars map[string]string
result bool
}
var pathMatcherTests = []pathMatcherTest{
{
matcher: NewRouter().NewRoute().Path("/{foo:[0-9][0-9][0-9]}/{bar:[0-9][0-9][0-9]}/{baz:[0-9][0-9][0-9]}"),
url: "http://localhost:8080/123/456/789",
vars: map[string]string{"foo": "123", "bar": "456", "baz": "789"},
result: true,
},
{
matcher: NewRouter().NewRoute().Path("/{foo:[0-9][0-9][0-9]}/{bar:[0-9][0-9][0-9]}/{baz:[0-9][0-9][0-9]}"),
url: "http://localhost:8080/1/2/3",
vars: map[string]string{"foo": "123", "bar": "456", "baz": "789"},
result: false,
},
}
type schemeMatcherTest struct {
matcher schemeMatcher
url string
result bool
}
var schemeMatcherTests = []schemeMatcherTest{
{
matcher: schemeMatcher([]string{"http", "https"}),
url: "http://localhost:8080/",
result: true,
},
{
matcher: schemeMatcher([]string{"http", "https"}),
url: "https://localhost:8080/",
result: true,
},
{
matcher: schemeMatcher([]string{"https"}),
url: "http://localhost:8080/",
result: false,
},
{
matcher: schemeMatcher([]string{"http"}),
url: "https://localhost:8080/",
result: false,
},
}
type urlBuildingTest struct {
route *Route
vars []string
url string
}
var urlBuildingTests = []urlBuildingTest{
{
route: new(Route).Host("foo.domain.com"),
vars: []string{},
url: "http://foo.domain.com",
},
{
route: new(Route).Host("{subdomain}.domain.com"),
vars: []string{"subdomain", "bar"},
url: "http://bar.domain.com",
},
{
route: new(Route).Host("foo.domain.com").Path("/articles"),
vars: []string{},
url: "http://foo.domain.com/articles",
},
{
route: new(Route).Path("/articles"),
vars: []string{},
url: "/articles",
},
{
route: new(Route).Path("/articles/{category}/{id:[0-9]+}"),
vars: []string{"category", "technology", "id", "42"},
url: "/articles/technology/42",
},
{
route: new(Route).Host("{subdomain}.domain.com").Path("/articles/{category}/{id:[0-9]+}"),
vars: []string{"subdomain", "foo", "category", "technology", "id", "42"},
url: "http://foo.domain.com/articles/technology/42",
},
}
func TestHeaderMatcher(t *testing.T) {
for _, v := range headerMatcherTests {
request, _ := http.NewRequest("GET", "http://localhost:8080/", nil)
for key, value := range v.headers {
request.Header.Add(key, value)
}
var routeMatch RouteMatch
result := v.matcher.Match(request, &routeMatch)
if result != v.result {
if v.result {
t.Errorf("%#v: should match %v.", v.matcher, request.Header)
} else {
t.Errorf("%#v: should not match %v.", v.matcher, request.Header)
}
}
}
}
func TestHostMatcher(t *testing.T) {
for _, v := range hostMatcherTests {
request, _ := http.NewRequest("GET", v.url, nil)
var routeMatch RouteMatch
result := v.matcher.Match(request, &routeMatch)
vars := routeMatch.Vars
if result != v.result {
if v.result {
t.Errorf("%#v: should match %v.", v.matcher, v.url)
} else {
t.Errorf("%#v: should not match %v.", v.matcher, v.url)
}
}
if result {
if len(vars) != len(v.vars) {
t.Errorf("%#v: vars length should be %v, got %v.", v.matcher, len(v.vars), len(vars))
}
for name, value := range vars {
if v.vars[name] != value {
t.Errorf("%#v: expected value %v for key %v, got %v.", v.matcher, v.vars[name], name, value)
}
}
} else {
if len(vars) != 0 {
t.Errorf("%#v: vars length should be 0, got %v.", v.matcher, len(vars))
}
}
}
}
func TestMethodMatcher(t *testing.T) {
for _, v := range methodMatcherTests {
request, _ := http.NewRequest(v.method, "http://localhost:8080/", nil)
var routeMatch RouteMatch
result := v.matcher.Match(request, &routeMatch)
if result != v.result {
if v.result {
t.Errorf("%#v: should match %v.", v.matcher, v.method)
} else {
t.Errorf("%#v: should not match %v.", v.matcher, v.method)
}
}
}
}
func TestPathMatcher(t *testing.T) {
for _, v := range pathMatcherTests {
request, _ := http.NewRequest("GET", v.url, nil)
var routeMatch RouteMatch
result := v.matcher.Match(request, &routeMatch)
vars := routeMatch.Vars
if result != v.result {
if v.result {
t.Errorf("%#v: should match %v.", v.matcher, v.url)
} else {
t.Errorf("%#v: should not match %v.", v.matcher, v.url)
}
}
if result {
if len(vars) != len(v.vars) {
t.Errorf("%#v: vars length should be %v, got %v.", v.matcher, len(v.vars), len(vars))
}
for name, value := range vars {
if v.vars[name] != value {
t.Errorf("%#v: expected value %v for key %v, got %v.", v.matcher, v.vars[name], name, value)
}
}
} else {
if len(vars) != 0 {
t.Errorf("%#v: vars length should be 0, got %v.", v.matcher, len(vars))
}
}
}
}
func TestSchemeMatcher(t *testing.T) {
for _, v := range schemeMatcherTests {
request, _ := http.NewRequest("GET", v.url, nil)
var routeMatch RouteMatch
result := v.matcher.Match(request, &routeMatch)
if result != v.result {
if v.result {
t.Errorf("%#v: should match %v.", v.matcher, v.url)
} else {
t.Errorf("%#v: should not match %v.", v.matcher, v.url)
}
}
}
}
func TestUrlBuilding(t *testing.T) {
for _, v := range urlBuildingTests {
u, _ := v.route.URL(v.vars...)
url := u.String()
if url != v.url {
t.Errorf("expected %v, got %v", v.url, url)
/*
reversePath := ""
reverseHost := ""
if v.route.pathTemplate != nil {
reversePath = v.route.pathTemplate.Reverse
}
if v.route.hostTemplate != nil {
reverseHost = v.route.hostTemplate.Reverse
}
t.Errorf("%#v:\nexpected: %q\ngot: %q\nreverse path: %q\nreverse host: %q", v.route, v.url, url, reversePath, reverseHost)
*/
}
}
ArticleHandler := func(w http.ResponseWriter, r *http.Request) {
}
router := NewRouter()
router.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).Name("article")
url, _ := router.Get("article").URL("category", "technology", "id", "42")
expected := "/articles/technology/42"
if url.String() != expected {
t.Errorf("Expected %v, got %v", expected, url.String())
}
}
func TestMatchedRouteName(t *testing.T) {
routeName := "stock"
router := NewRouter()
route := router.NewRoute().Path("/products/").Name(routeName)
url := "http://www.domain.com/products/"
request, _ := http.NewRequest("GET", url, nil)
var rv RouteMatch
ok := router.Match(request, &rv)
if !ok || rv.Route != route {
t.Errorf("Expected same route, got %+v.", rv.Route)
}
retName := rv.Route.GetName()
if retName != routeName {
t.Errorf("Expected %q, got %q.", routeName, retName)
}
}
func TestSubRouting(t *testing.T) {
// Example from docs.
router := NewRouter()
subrouter := router.NewRoute().Host("www.domain.com").Subrouter()
route := subrouter.NewRoute().Path("/products/").Name("products")
url := "http://www.domain.com/products/"
request, _ := http.NewRequest("GET", url, nil)
var rv RouteMatch
ok := router.Match(request, &rv)
if !ok || rv.Route != route {
t.Errorf("Expected same route, got %+v.", rv.Route)
}
u, _ := router.Get("products").URL()
builtUrl := u.String()
// Yay, subroute aware of the domain when building!
if builtUrl != url {
t.Errorf("Expected %q, got %q.", url, builtUrl)
}
}
func TestVariableNames(t *testing.T) {
route := new(Route).Host("{arg1}.domain.com").Path("/{arg1}/{arg2:[0-9]+}")
if route.err == nil {
t.Errorf("Expected error for duplicated variable names")
}
}
func TestRedirectSlash(t *testing.T) {
var route *Route
var routeMatch RouteMatch
r := NewRouter()
r.StrictSlash(false)
route = r.NewRoute()
if route.strictSlash != false {
t.Errorf("Expected false redirectSlash.")
}
r.StrictSlash(true)
route = r.NewRoute()
if route.strictSlash != true {
t.Errorf("Expected true redirectSlash.")
}
route = new(Route)
route.strictSlash = true
route.Path("/{arg1}/{arg2:[0-9]+}/")
request, _ := http.NewRequest("GET", "http://localhost/foo/123", nil)
routeMatch = RouteMatch{}
_ = route.Match(request, &routeMatch)
vars := routeMatch.Vars
if vars["arg1"] != "foo" {
t.Errorf("Expected foo.")
}
if vars["arg2"] != "123" {
t.Errorf("Expected 123.")
}
rsp := NewRecorder()
routeMatch.Handler.ServeHTTP(rsp, request)
if rsp.HeaderMap.Get("Location") != "http://localhost/foo/123/" {
t.Errorf("Expected redirect header.")
}
route = new(Route)
route.strictSlash = true
route.Path("/{arg1}/{arg2:[0-9]+}")
request, _ = http.NewRequest("GET", "http://localhost/foo/123/", nil)
routeMatch = RouteMatch{}
_ = route.Match(request, &routeMatch)
vars = routeMatch.Vars
if vars["arg1"] != "foo" {
t.Errorf("Expected foo.")
}
if vars["arg2"] != "123" {
t.Errorf("Expected 123.")
}
rsp = NewRecorder()
routeMatch.Handler.ServeHTTP(rsp, request)
if rsp.HeaderMap.Get("Location") != "http://localhost/foo/123" {
t.Errorf("Expected redirect header.")
}
}
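The two halves of TestRedirectSlash above exercise the strict-slash redirect in both directions (adding and removing the trailing slash). For orientation, here is a minimal sketch of the same option used through the public API; it is hypothetical usage, assuming the NewRouter/StrictSlash/HandleFunc entry points the tests in this file already rely on, with the upstream gorilla/mux import path standing in for this vendored copy.
package main
import (
	"fmt"
	"net/http"
	mux "github.com/gorilla/mux" // assumption: import path standing in for the vendored router
)
func main() {
	r := mux.NewRouter()
	// With StrictSlash(true) a request for /products is answered with a
	// redirect whose Location header points at /products/, which is exactly
	// what TestRedirectSlash asserts above.
	r.StrictSlash(true)
	r.HandleFunc("/products/", func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintln(w, "product listing")
	})
	http.ListenAndServe(":8080", r)
}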
// Tests for regexp repetition quantifiers ({n}, {n,}, {n,m}) in route variables.
func TestNewRegexp(t *testing.T) {
var p *routeRegexp
var matches []string
tests := map[string]map[string][]string{
"/{foo:a{2}}": {
"/a": nil,
"/aa": {"aa"},
"/aaa": nil,
"/aaaa": nil,
},
"/{foo:a{2,}}": {
"/a": nil,
"/aa": {"aa"},
"/aaa": {"aaa"},
"/aaaa": {"aaaa"},
},
"/{foo:a{2,3}}": {
"/a": nil,
"/aa": {"aa"},
"/aaa": {"aaa"},
"/aaaa": nil,
},
"/{foo:[a-z]{3}}/{bar:[a-z]{2}}": {
"/a": nil,
"/ab": nil,
"/abc": nil,
"/abcd": nil,
"/abc/ab": {"abc", "ab"},
"/abc/abc": nil,
"/abcd/ab": nil,
},
`/{foo:\w{3,}}/{bar:\d{2,}}`: {
"/a": nil,
"/ab": nil,
"/abc": nil,
"/abc/1": nil,
"/abc/12": {"abc", "12"},
"/abcd/12": {"abcd", "12"},
"/abcd/123": {"abcd", "123"},
},
}
for pattern, paths := range tests {
p, _ = newRouteRegexp(pattern, false, false, false, false)
for path, result := range paths {
matches = p.regexp.FindStringSubmatch(path)
if result == nil {
if matches != nil {
t.Errorf("%v should not match %v.", pattern, path)
}
} else {
if len(matches) != len(result)+1 {
t.Errorf("Expected %v matches, got %v.", len(result)+1, len(matches))
} else {
for k, v := range result {
if matches[k+1] != v {
t.Errorf("Expected %v, got %v.", v, matches[k+1])
}
}
}
}
}
}
}
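TestNewRegexp checks the raw pattern matching; on the handler side those captured values are normally read back with the package's Vars helper. The following is a small illustrative sketch, assuming the vendored router keeps the upstream gorilla/mux Vars(r) accessor alongside the NewRouter/HandleFunc/Name calls used in TestUrlBuilding above.
package main
import (
	"fmt"
	"net/http"
	mux "github.com/gorilla/mux" // assumption: import path standing in for the vendored router
)
// articleHandler reads the values captured by the {category} and
// {id:[0-9]+} variables, the same pattern built in TestUrlBuilding.
func articleHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	fmt.Fprintf(w, "category=%s id=%s\n", vars["category"], vars["id"])
}
func main() {
	r := mux.NewRouter()
	r.HandleFunc("/articles/{category}/{id:[0-9]+}", articleHandler).Name("article")
	http.ListenAndServe(":8080", r)
}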


@@ -1,4 +0,0 @@
_*
*.swp
*.[568]
[568].out


@@ -1,91 +0,0 @@
// These tests verify the test running logic.
package check_test
import (
. "github.com/minio/check"
"time"
)
var benchmarkS = Suite(&BenchmarkS{})
type BenchmarkS struct{}
func (s *BenchmarkS) TestCountSuite(c *C) {
suitesRun += 1
}
func (s *BenchmarkS) TestBasicTestTiming(c *C) {
helper := FixtureHelper{sleepOn: "Test1", sleep: 1000000 * time.Nanosecond}
output := String{}
runConf := RunConf{Output: &output, Verbose: true}
Run(&helper, &runConf)
expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test1\t0\\.001s\n" +
"PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t0\\.000s\n"
c.Assert(output.value, Matches, expected)
}
func (s *BenchmarkS) TestStreamTestTiming(c *C) {
helper := FixtureHelper{sleepOn: "SetUpSuite", sleep: 1000000 * time.Nanosecond}
output := String{}
runConf := RunConf{Output: &output, Stream: true}
Run(&helper, &runConf)
expected := "(?s).*\nPASS: check_test\\.go:[0-9]+: FixtureHelper\\.SetUpSuite\t *0\\.001s\n.*"
c.Assert(output.value, Matches, expected)
}
func (s *BenchmarkS) TestBenchmark(c *C) {
helper := FixtureHelper{sleep: 100000}
output := String{}
runConf := RunConf{
Output: &output,
Benchmark: true,
BenchmarkTime: 10000000,
Filter: "Benchmark1",
}
Run(&helper, &runConf)
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Benchmark1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "SetUpTest")
c.Check(helper.calls[5], Equals, "Benchmark1")
c.Check(helper.calls[6], Equals, "TearDownTest")
// ... and more.
expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark1\t *100\t *[12][0-9]{5} ns/op\n"
c.Assert(output.value, Matches, expected)
}
func (s *BenchmarkS) TestBenchmarkBytes(c *C) {
helper := FixtureHelper{sleep: 100000}
output := String{}
runConf := RunConf{
Output: &output,
Benchmark: true,
BenchmarkTime: 10000000,
Filter: "Benchmark2",
}
Run(&helper, &runConf)
expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark2\t *100\t *[12][0-9]{5} ns/op\t *[4-9]\\.[0-9]{2} MB/s\n"
c.Assert(output.value, Matches, expected)
}
func (s *BenchmarkS) TestBenchmarkMem(c *C) {
helper := FixtureHelper{sleep: 100000}
output := String{}
runConf := RunConf{
Output: &output,
Benchmark: true,
BenchmarkMem: true,
BenchmarkTime: 10000000,
Filter: "Benchmark3",
}
Run(&helper, &runConf)
expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Benchmark3\t *100\t *[12][0-9]{5} ns/op\t *[0-9]+ B/op\t *[1-9] allocs/op\n"
c.Assert(output.value, Matches, expected)
}
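The assertions above drive FixtureHelper's Benchmark* methods through Run with Benchmark set in RunConf and check the rendered ns/op, MB/s and allocs/op columns. As a hedged sketch of what such a benchmark looks like in a consuming suite (suite and method names here are illustrative), the body loops exactly c.N times and may call c.SetBytes to enable the throughput column, as Benchmark2 does:
package mypkg_test
import (
	"bytes"
	. "github.com/minio/check"
)
type BenchSuite struct{}
var _ = Suite(&BenchSuite{})
// BenchmarkCopy is timed by the check runner; the measured operation must
// run exactly c.N times, mirroring FixtureHelper.Benchmark1 in check_test.go.
func (s *BenchSuite) BenchmarkCopy(c *C) {
	src := bytes.Repeat([]byte("x"), 1024)
	c.SetBytes(int64(len(src))) // reports MB/s, as asserted in TestBenchmarkBytes
	for i := 0; i < c.N; i++ {
		dst := make([]byte, len(src))
		copy(dst, src)
	}
}
// Requires the usual `func Test(t *testing.T) { TestingT(t) }` hook in the package.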


@@ -1,82 +0,0 @@
// These initial tests are for bootstrapping. They verify that we can
// basically use the testing infrastructure itself to check if the test
// system is working.
//
// These tests will break the test runner badly in case of
// errors because if they simply fail, we can't be sure the developer
// will ever see anything (because failing means the failing system
// somehow isn't working! :-)
//
// Do not assume *any* internal functionality works as expected besides
// what's actually tested here.
package check_test
import (
"fmt"
"github.com/minio/check"
"strings"
)
type BootstrapS struct{}
var bootstrapS = check.Suite(&BootstrapS{})
func (s *BootstrapS) TestCountSuite(c *check.C) {
suitesRun += 1
}
func (s *BootstrapS) TestFailedAndFail(c *check.C) {
if c.Failed() {
critical("c.Failed() must be false first!")
}
c.Fail()
if !c.Failed() {
critical("c.Fail() didn't put the test in a failed state!")
}
c.Succeed()
}
func (s *BootstrapS) TestFailedAndSucceed(c *check.C) {
c.Fail()
c.Succeed()
if c.Failed() {
critical("c.Succeed() didn't put the test back in a non-failed state")
}
}
func (s *BootstrapS) TestLogAndGetTestLog(c *check.C) {
c.Log("Hello there!")
log := c.GetTestLog()
if log != "Hello there!\n" {
critical(fmt.Sprintf("Log() or GetTestLog() is not working! Got: %#v", log))
}
}
func (s *BootstrapS) TestLogfAndGetTestLog(c *check.C) {
c.Logf("Hello %v", "there!")
log := c.GetTestLog()
if log != "Hello there!\n" {
critical(fmt.Sprintf("Logf() or GetTestLog() is not working! Got: %#v", log))
}
}
func (s *BootstrapS) TestRunShowsErrors(c *check.C) {
output := String{}
check.Run(&FailHelper{}, &check.RunConf{Output: &output})
if strings.Index(output.value, "Expected failure!") == -1 {
critical(fmt.Sprintf("RunWithWriter() output did not contain the "+
"expected failure! Got: %#v",
output.value))
}
}
func (s *BootstrapS) TestRunDoesntShowSuccesses(c *check.C) {
output := String{}
check.Run(&SuccessHelper{}, &check.RunConf{Output: &output})
if strings.Index(output.value, "Expected success!") != -1 {
critical(fmt.Sprintf("RunWithWriter() output contained a successful "+
"test! Got: %#v",
output.value))
}
}


@@ -1,207 +0,0 @@
// This file contains just a few generic helpers which are used by the
// other test files.
package check_test
import (
"flag"
"fmt"
"os"
"regexp"
"runtime"
"testing"
"time"
"github.com/minio/check"
)
// We count the number of suites run at least to get a vague hint that the
// test suite is behaving as it should. Otherwise a bug introduced at the
// very core of the system could go unperceived.
const suitesRunExpected = 8
var suitesRun int = 0
func Test(t *testing.T) {
check.TestingT(t)
if suitesRun != suitesRunExpected && flag.Lookup("check.f").Value.String() == "" {
critical(fmt.Sprintf("Expected %d suites to run rather than %d",
suitesRunExpected, suitesRun))
}
}
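Test above is the single entry point that hands the registered suites to the check runner via TestingT; the suitesRun counter then cross-checks that every suite actually executed. In a consuming package the same wiring is all that is required. The sketch below is a minimal, hypothetical example using only API visible in these files (TestingT, Suite, C, Assert, Equals):
package mypkg_test
import (
	"testing"
	"github.com/minio/check"
)
// Hook the check runner into `go test`.
func Test(t *testing.T) { check.TestingT(t) }
type MySuite struct{}
// Registering the suite is enough; TestingT discovers and runs it.
var _ = check.Suite(&MySuite{})
func (s *MySuite) TestAddition(c *check.C) {
	c.Assert(1+1, check.Equals, 2)
}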
// -----------------------------------------------------------------------
// Helper functions.
// Break down badly. This is used in test cases which can't yet assume
// that the fundamental bits are working.
func critical(error string) {
fmt.Fprintln(os.Stderr, "CRITICAL: "+error)
os.Exit(1)
}
// Return the file line where it's called.
func getMyLine() int {
if _, _, line, ok := runtime.Caller(1); ok {
return line
}
return -1
}
// -----------------------------------------------------------------------
// Helper type implementing a basic io.Writer for testing output.
// Type implementing the io.Writer interface for analyzing output.
type String struct {
value string
}
// The only function required by the io.Writer interface. Will append
// written data to the String.value string.
func (s *String) Write(p []byte) (n int, err error) {
s.value += string(p)
return len(p), nil
}
// Trivial wrapper to test errors happening on a different file
// than the test itself.
func checkEqualWrapper(c *check.C, obtained, expected interface{}) (result bool, line int) {
return c.Check(obtained, check.Equals, expected), getMyLine()
}
// -----------------------------------------------------------------------
// Helper suite for testing basic fail behavior.
type FailHelper struct {
testLine int
}
func (s *FailHelper) TestLogAndFail(c *check.C) {
s.testLine = getMyLine() - 1
c.Log("Expected failure!")
c.Fail()
}
// -----------------------------------------------------------------------
// Helper suite for testing basic success behavior.
type SuccessHelper struct{}
func (s *SuccessHelper) TestLogAndSucceed(c *check.C) {
c.Log("Expected success!")
}
// -----------------------------------------------------------------------
// Helper suite for testing ordering and behavior of fixture.
type FixtureHelper struct {
calls []string
panicOn string
skip bool
skipOnN int
sleepOn string
sleep time.Duration
bytes int64
}
func (s *FixtureHelper) trace(name string, c *check.C) {
s.calls = append(s.calls, name)
if name == s.panicOn {
panic(name)
}
if s.sleep > 0 && s.sleepOn == name {
time.Sleep(s.sleep)
}
if s.skip && s.skipOnN == len(s.calls)-1 {
c.Skip("skipOnN == n")
}
}
func (s *FixtureHelper) SetUpSuite(c *check.C) {
s.trace("SetUpSuite", c)
}
func (s *FixtureHelper) TearDownSuite(c *check.C) {
s.trace("TearDownSuite", c)
}
func (s *FixtureHelper) SetUpTest(c *check.C) {
s.trace("SetUpTest", c)
}
func (s *FixtureHelper) TearDownTest(c *check.C) {
s.trace("TearDownTest", c)
}
func (s *FixtureHelper) Test1(c *check.C) {
s.trace("Test1", c)
}
func (s *FixtureHelper) Test2(c *check.C) {
s.trace("Test2", c)
}
func (s *FixtureHelper) Benchmark1(c *check.C) {
s.trace("Benchmark1", c)
for i := 0; i < c.N; i++ {
time.Sleep(s.sleep)
}
}
func (s *FixtureHelper) Benchmark2(c *check.C) {
s.trace("Benchmark2", c)
c.SetBytes(1024)
for i := 0; i < c.N; i++ {
time.Sleep(s.sleep)
}
}
func (s *FixtureHelper) Benchmark3(c *check.C) {
var x []int64
s.trace("Benchmark3", c)
for i := 0; i < c.N; i++ {
time.Sleep(s.sleep)
x = make([]int64, 5)
_ = x
}
}
// -----------------------------------------------------------------------
// Helper which checks the state of the test and ensures that it matches
// the given expectations. Depends on c.Errorf() working, so shouldn't
// be used to test this one function.
type expectedState struct {
name string
result interface{}
failed bool
log string
}
// Verify the state of the test. Note that since this also verifies if
// the test is supposed to be in a failed state, no other checks should
// be done in addition to what is being tested.
func checkState(c *check.C, result interface{}, expected *expectedState) {
failed := c.Failed()
c.Succeed()
log := c.GetTestLog()
matched, matchError := regexp.MatchString("^"+expected.log+"$", log)
if matchError != nil {
c.Errorf("Error in matching expression used in testing %s",
expected.name)
} else if !matched {
c.Errorf("%s logged:\n----------\n%s----------\n\nExpected:\n----------\n%s\n----------",
expected.name, log, expected.log)
}
if result != expected.result {
c.Errorf("%s returned %#v rather than %#v",
expected.name, result, expected.result)
}
if failed != expected.failed {
if failed {
c.Errorf("%s has failed when it shouldn't", expected.name)
} else {
c.Errorf("%s has not failed when it should", expected.name)
}
}
}


@@ -1,272 +0,0 @@
package check_test
import (
"errors"
"github.com/minio/check"
"reflect"
"runtime"
)
type CheckersS struct{}
var _ = check.Suite(&CheckersS{})
func testInfo(c *check.C, checker check.Checker, name string, paramNames []string) {
info := checker.Info()
if info.Name != name {
c.Fatalf("Got name %s, expected %s", info.Name, name)
}
if !reflect.DeepEqual(info.Params, paramNames) {
c.Fatalf("Got param names %#v, expected %#v", info.Params, paramNames)
}
}
func testCheck(c *check.C, checker check.Checker, result bool, error string, params ...interface{}) ([]interface{}, []string) {
info := checker.Info()
if len(params) != len(info.Params) {
c.Fatalf("unexpected param count in test; expected %d got %d", len(info.Params), len(params))
}
names := append([]string{}, info.Params...)
result_, error_ := checker.Check(params, names)
if result_ != result || error_ != error {
c.Fatalf("%s.Check(%#v) returned (%#v, %#v) rather than (%#v, %#v)",
info.Name, params, result_, error_, result, error)
}
return params, names
}
func (s *CheckersS) TestComment(c *check.C) {
bug := check.Commentf("a %d bc", 42)
comment := bug.CheckCommentString()
if comment != "a 42 bc" {
c.Fatalf("Commentf returned %#v", comment)
}
}
func (s *CheckersS) TestIsNil(c *check.C) {
testInfo(c, check.IsNil, "IsNil", []string{"value"})
testCheck(c, check.IsNil, true, "", nil)
testCheck(c, check.IsNil, false, "", "a")
testCheck(c, check.IsNil, true, "", (chan int)(nil))
testCheck(c, check.IsNil, false, "", make(chan int))
testCheck(c, check.IsNil, true, "", (error)(nil))
testCheck(c, check.IsNil, false, "", errors.New(""))
testCheck(c, check.IsNil, true, "", ([]int)(nil))
testCheck(c, check.IsNil, false, "", make([]int, 1))
testCheck(c, check.IsNil, false, "", int(0))
}
func (s *CheckersS) TestNotNil(c *check.C) {
testInfo(c, check.NotNil, "NotNil", []string{"value"})
testCheck(c, check.NotNil, false, "", nil)
testCheck(c, check.NotNil, true, "", "a")
testCheck(c, check.NotNil, false, "", (chan int)(nil))
testCheck(c, check.NotNil, true, "", make(chan int))
testCheck(c, check.NotNil, false, "", (error)(nil))
testCheck(c, check.NotNil, true, "", errors.New(""))
testCheck(c, check.NotNil, false, "", ([]int)(nil))
testCheck(c, check.NotNil, true, "", make([]int, 1))
}
func (s *CheckersS) TestNot(c *check.C) {
testInfo(c, check.Not(check.IsNil), "Not(IsNil)", []string{"value"})
testCheck(c, check.Not(check.IsNil), false, "", nil)
testCheck(c, check.Not(check.IsNil), true, "", "a")
}
type simpleStruct struct {
i int
}
func (s *CheckersS) TestEquals(c *check.C) {
testInfo(c, check.Equals, "Equals", []string{"obtained", "expected"})
// The simplest.
testCheck(c, check.Equals, true, "", 42, 42)
testCheck(c, check.Equals, false, "", 42, 43)
// Different native types.
testCheck(c, check.Equals, false, "", int32(42), int64(42))
// With nil.
testCheck(c, check.Equals, false, "", 42, nil)
// Slices
testCheck(c, check.Equals, false, "runtime error: comparing uncomparable type []uint8", []byte{1, 2}, []byte{1, 2})
// Struct values
testCheck(c, check.Equals, true, "", simpleStruct{1}, simpleStruct{1})
testCheck(c, check.Equals, false, "", simpleStruct{1}, simpleStruct{2})
// Struct pointers
testCheck(c, check.Equals, false, "", &simpleStruct{1}, &simpleStruct{1})
testCheck(c, check.Equals, false, "", &simpleStruct{1}, &simpleStruct{2})
}
func (s *CheckersS) TestDeepEquals(c *check.C) {
testInfo(c, check.DeepEquals, "DeepEquals", []string{"obtained", "expected"})
// The simplest.
testCheck(c, check.DeepEquals, true, "", 42, 42)
testCheck(c, check.DeepEquals, false, "", 42, 43)
// Different native types.
testCheck(c, check.DeepEquals, false, "", int32(42), int64(42))
// With nil.
testCheck(c, check.DeepEquals, false, "", 42, nil)
// Slices
testCheck(c, check.DeepEquals, true, "", []byte{1, 2}, []byte{1, 2})
testCheck(c, check.DeepEquals, false, "", []byte{1, 2}, []byte{1, 3})
// Struct values
testCheck(c, check.DeepEquals, true, "", simpleStruct{1}, simpleStruct{1})
testCheck(c, check.DeepEquals, false, "", simpleStruct{1}, simpleStruct{2})
// Struct pointers
testCheck(c, check.DeepEquals, true, "", &simpleStruct{1}, &simpleStruct{1})
testCheck(c, check.DeepEquals, false, "", &simpleStruct{1}, &simpleStruct{2})
}
func (s *CheckersS) TestHasLen(c *check.C) {
testInfo(c, check.HasLen, "HasLen", []string{"obtained", "n"})
testCheck(c, check.HasLen, true, "", "abcd", 4)
testCheck(c, check.HasLen, true, "", []int{1, 2}, 2)
testCheck(c, check.HasLen, false, "", []int{1, 2}, 3)
testCheck(c, check.HasLen, false, "n must be an int", []int{1, 2}, "2")
testCheck(c, check.HasLen, false, "obtained value type has no length", nil, 2)
}
func (s *CheckersS) TestErrorMatches(c *check.C) {
testInfo(c, check.ErrorMatches, "ErrorMatches", []string{"value", "regex"})
testCheck(c, check.ErrorMatches, false, "Error value is nil", nil, "some error")
testCheck(c, check.ErrorMatches, false, "Value is not an error", 1, "some error")
testCheck(c, check.ErrorMatches, true, "", errors.New("some error"), "some error")
testCheck(c, check.ErrorMatches, true, "", errors.New("some error"), "so.*or")
// Verify params mutation
params, names := testCheck(c, check.ErrorMatches, false, "", errors.New("some error"), "other error")
c.Assert(params[0], check.Equals, "some error")
c.Assert(names[0], check.Equals, "error")
}
func (s *CheckersS) TestMatches(c *check.C) {
testInfo(c, check.Matches, "Matches", []string{"value", "regex"})
// Simple matching
testCheck(c, check.Matches, true, "", "abc", "abc")
testCheck(c, check.Matches, true, "", "abc", "a.c")
// Must match fully
testCheck(c, check.Matches, false, "", "abc", "ab")
testCheck(c, check.Matches, false, "", "abc", "bc")
// String()-enabled values accepted
testCheck(c, check.Matches, true, "", reflect.ValueOf("abc"), "a.c")
testCheck(c, check.Matches, false, "", reflect.ValueOf("abc"), "a.d")
// Some error conditions.
testCheck(c, check.Matches, false, "Obtained value is not a string and has no .String()", 1, "a.c")
testCheck(c, check.Matches, false, "Can't compile regex: error parsing regexp: missing closing ]: `[c$`", "abc", "a[c")
}
func (s *CheckersS) TestPanics(c *check.C) {
testInfo(c, check.Panics, "Panics", []string{"function", "expected"})
// Some errors.
testCheck(c, check.Panics, false, "Function has not panicked", func() bool { return false }, "BOOM")
testCheck(c, check.Panics, false, "Function must take zero arguments", 1, "BOOM")
// Plain strings.
testCheck(c, check.Panics, true, "", func() { panic("BOOM") }, "BOOM")
testCheck(c, check.Panics, false, "", func() { panic("KABOOM") }, "BOOM")
testCheck(c, check.Panics, true, "", func() bool { panic("BOOM") }, "BOOM")
// Error values.
testCheck(c, check.Panics, true, "", func() { panic(errors.New("BOOM")) }, errors.New("BOOM"))
testCheck(c, check.Panics, false, "", func() { panic(errors.New("KABOOM")) }, errors.New("BOOM"))
type deep struct{ i int }
// Deep value
testCheck(c, check.Panics, true, "", func() { panic(&deep{99}) }, &deep{99})
// Verify params/names mutation
params, names := testCheck(c, check.Panics, false, "", func() { panic(errors.New("KABOOM")) }, errors.New("BOOM"))
c.Assert(params[0], check.ErrorMatches, "KABOOM")
c.Assert(names[0], check.Equals, "panic")
// Verify a nil panic
testCheck(c, check.Panics, true, "", func() { panic(nil) }, nil)
testCheck(c, check.Panics, false, "", func() { panic(nil) }, "NOPE")
}
func (s *CheckersS) TestPanicMatches(c *check.C) {
testInfo(c, check.PanicMatches, "PanicMatches", []string{"function", "expected"})
// Error matching.
testCheck(c, check.PanicMatches, true, "", func() { panic(errors.New("BOOM")) }, "BO.M")
testCheck(c, check.PanicMatches, false, "", func() { panic(errors.New("KABOOM")) }, "BO.M")
// Some errors.
testCheck(c, check.PanicMatches, false, "Function has not panicked", func() bool { return false }, "BOOM")
testCheck(c, check.PanicMatches, false, "Function must take zero arguments", 1, "BOOM")
// Plain strings.
testCheck(c, check.PanicMatches, true, "", func() { panic("BOOM") }, "BO.M")
testCheck(c, check.PanicMatches, false, "", func() { panic("KABOOM") }, "BOOM")
testCheck(c, check.PanicMatches, true, "", func() bool { panic("BOOM") }, "BO.M")
// Verify params/names mutation
params, names := testCheck(c, check.PanicMatches, false, "", func() { panic(errors.New("KABOOM")) }, "BOOM")
c.Assert(params[0], check.Equals, "KABOOM")
c.Assert(names[0], check.Equals, "panic")
// Verify a nil panic
testCheck(c, check.PanicMatches, false, "Panic value is not a string or an error", func() { panic(nil) }, "")
}
func (s *CheckersS) TestFitsTypeOf(c *check.C) {
testInfo(c, check.FitsTypeOf, "FitsTypeOf", []string{"obtained", "sample"})
// Basic types
testCheck(c, check.FitsTypeOf, true, "", 1, 0)
testCheck(c, check.FitsTypeOf, false, "", 1, int64(0))
// Aliases
testCheck(c, check.FitsTypeOf, false, "", 1, errors.New(""))
testCheck(c, check.FitsTypeOf, false, "", "error", errors.New(""))
testCheck(c, check.FitsTypeOf, true, "", errors.New("error"), errors.New(""))
// Structures
testCheck(c, check.FitsTypeOf, false, "", 1, simpleStruct{})
testCheck(c, check.FitsTypeOf, false, "", simpleStruct{42}, &simpleStruct{})
testCheck(c, check.FitsTypeOf, true, "", simpleStruct{42}, simpleStruct{})
testCheck(c, check.FitsTypeOf, true, "", &simpleStruct{42}, &simpleStruct{})
// Some bad values
testCheck(c, check.FitsTypeOf, false, "Invalid sample value", 1, interface{}(nil))
testCheck(c, check.FitsTypeOf, false, "", interface{}(nil), 0)
}
func (s *CheckersS) TestImplements(c *check.C) {
testInfo(c, check.Implements, "Implements", []string{"obtained", "ifaceptr"})
var e error
var re runtime.Error
testCheck(c, check.Implements, true, "", errors.New(""), &e)
testCheck(c, check.Implements, false, "", errors.New(""), &re)
// Some bad values
testCheck(c, check.Implements, false, "ifaceptr should be a pointer to an interface variable", 0, errors.New(""))
testCheck(c, check.Implements, false, "ifaceptr should be a pointer to an interface variable", 0, interface{}(nil))
testCheck(c, check.Implements, false, "", interface{}(nil), &e)
}
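The cases above only exercise the built-in checkers (IsNil, Equals, DeepEquals, Matches, Panics, FitsTypeOf, Implements and friends). Checkers are an open interface, so suites can add their own; the sketch below is illustrative only and follows the same Info/Check signatures that the fake MyChecker in helpers_test.go implements.
package mypkg_test
import "github.com/minio/check"
// isPositiveChecker succeeds when the obtained value is an int greater than zero.
type isPositiveChecker struct{}
func (isPositiveChecker) Info() *check.CheckerInfo {
	return &check.CheckerInfo{Name: "IsPositive", Params: []string{"obtained"}}
}
func (isPositiveChecker) Check(params []interface{}, names []string) (bool, string) {
	n, ok := params[0].(int)
	if !ok {
		return false, "obtained value must be an int"
	}
	return n > 0, ""
}
// IsPositive can then be used like any built-in checker: c.Assert(count, IsPositive)
var IsPositive check.Checker = isPositiveChecker{}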


@@ -1,9 +0,0 @@
package check
func PrintLine(filename string, line int) (string, error) {
return printLine(filename, line)
}
func Indent(s, with string) string {
return indent(s, with)
}


@@ -1,484 +0,0 @@
// Tests for the behavior of the test fixture system.
package check_test
import (
. "github.com/minio/check"
)
// -----------------------------------------------------------------------
// Fixture test suite.
type FixtureS struct{}
var fixtureS = Suite(&FixtureS{})
func (s *FixtureS) TestCountSuite(c *C) {
suitesRun += 1
}
// -----------------------------------------------------------------------
// Basic fixture ordering verification.
func (s *FixtureS) TestOrder(c *C) {
helper := FixtureHelper{}
Run(&helper, nil)
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "SetUpTest")
c.Check(helper.calls[5], Equals, "Test2")
c.Check(helper.calls[6], Equals, "TearDownTest")
c.Check(helper.calls[7], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 8)
}
// -----------------------------------------------------------------------
// Check the behavior when panics occur within tests and fixtures.
func (s *FixtureS) TestPanicOnTest(c *C) {
helper := FixtureHelper{panicOn: "Test1"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "SetUpTest")
c.Check(helper.calls[5], Equals, "Test2")
c.Check(helper.calls[6], Equals, "TearDownTest")
c.Check(helper.calls[7], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 8)
expected := "^\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: FixtureHelper.Test1\n\n" +
"\\.\\.\\. Panic: Test1 \\(PC=[xA-F0-9]+\\)\n\n" +
".+:[0-9]+\n" +
" in (go)?panic\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.trace\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.Test1\n" +
"(.|\n)*$"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnSetUpTest(c *C) {
helper := FixtureHelper{panicOn: "SetUpTest"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "TearDownTest")
c.Check(helper.calls[3], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 4)
expected := "^\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: " +
"FixtureHelper\\.SetUpTest\n\n" +
"\\.\\.\\. Panic: SetUpTest \\(PC=[xA-F0-9]+\\)\n\n" +
".+:[0-9]+\n" +
" in (go)?panic\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.trace\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.SetUpTest\n" +
"(.|\n)*" +
"\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: " +
"FixtureHelper\\.Test1\n\n" +
"\\.\\.\\. Panic: Fixture has panicked " +
"\\(see related PANIC\\)\n$"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnTearDownTest(c *C) {
helper := FixtureHelper{panicOn: "TearDownTest"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 5)
expected := "^\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: " +
"FixtureHelper.TearDownTest\n\n" +
"\\.\\.\\. Panic: TearDownTest \\(PC=[xA-F0-9]+\\)\n\n" +
".+:[0-9]+\n" +
" in (go)?panic\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.trace\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.TearDownTest\n" +
"(.|\n)*" +
"\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: " +
"FixtureHelper\\.Test1\n\n" +
"\\.\\.\\. Panic: Fixture has panicked " +
"\\(see related PANIC\\)\n$"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnSetUpSuite(c *C) {
helper := FixtureHelper{panicOn: "SetUpSuite"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 2)
expected := "^\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: " +
"FixtureHelper.SetUpSuite\n\n" +
"\\.\\.\\. Panic: SetUpSuite \\(PC=[xA-F0-9]+\\)\n\n" +
".+:[0-9]+\n" +
" in (go)?panic\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.trace\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.SetUpSuite\n" +
"(.|\n)*$"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnTearDownSuite(c *C) {
helper := FixtureHelper{panicOn: "TearDownSuite"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "SetUpTest")
c.Check(helper.calls[5], Equals, "Test2")
c.Check(helper.calls[6], Equals, "TearDownTest")
c.Check(helper.calls[7], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 8)
expected := "^\n-+\n" +
"PANIC: check_test\\.go:[0-9]+: " +
"FixtureHelper.TearDownSuite\n\n" +
"\\.\\.\\. Panic: TearDownSuite \\(PC=[xA-F0-9]+\\)\n\n" +
".+:[0-9]+\n" +
" in (go)?panic\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.trace\n" +
".*check_test.go:[0-9]+\n" +
" in FixtureHelper.TearDownSuite\n" +
"(.|\n)*$"
c.Check(output.value, Matches, expected)
}
// -----------------------------------------------------------------------
// A wrong argument on a test or fixture will produce a nice error.
func (s *FixtureS) TestPanicOnWrongTestArg(c *C) {
helper := WrongTestArgHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "TearDownTest")
c.Check(helper.calls[3], Equals, "SetUpTest")
c.Check(helper.calls[4], Equals, "Test2")
c.Check(helper.calls[5], Equals, "TearDownTest")
c.Check(helper.calls[6], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 7)
expected := "^\n-+\n" +
"PANIC: fixture_test\\.go:[0-9]+: " +
"WrongTestArgHelper\\.Test1\n\n" +
"\\.\\.\\. Panic: WrongTestArgHelper\\.Test1 argument " +
"should be \\*check\\.C\n"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnWrongSetUpTestArg(c *C) {
helper := WrongSetUpTestArgHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(len(helper.calls), Equals, 0)
expected :=
"^\n-+\n" +
"PANIC: fixture_test\\.go:[0-9]+: " +
"WrongSetUpTestArgHelper\\.SetUpTest\n\n" +
"\\.\\.\\. Panic: WrongSetUpTestArgHelper\\.SetUpTest argument " +
"should be \\*check\\.C\n"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnWrongSetUpSuiteArg(c *C) {
helper := WrongSetUpSuiteArgHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(len(helper.calls), Equals, 0)
expected :=
"^\n-+\n" +
"PANIC: fixture_test\\.go:[0-9]+: " +
"WrongSetUpSuiteArgHelper\\.SetUpSuite\n\n" +
"\\.\\.\\. Panic: WrongSetUpSuiteArgHelper\\.SetUpSuite argument " +
"should be \\*check\\.C\n"
c.Check(output.value, Matches, expected)
}
// -----------------------------------------------------------------------
// Nice errors also when tests or fixtures have the wrong argument count.
func (s *FixtureS) TestPanicOnWrongTestArgCount(c *C) {
helper := WrongTestArgCountHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "TearDownTest")
c.Check(helper.calls[3], Equals, "SetUpTest")
c.Check(helper.calls[4], Equals, "Test2")
c.Check(helper.calls[5], Equals, "TearDownTest")
c.Check(helper.calls[6], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 7)
expected := "^\n-+\n" +
"PANIC: fixture_test\\.go:[0-9]+: " +
"WrongTestArgCountHelper\\.Test1\n\n" +
"\\.\\.\\. Panic: WrongTestArgCountHelper\\.Test1 argument " +
"should be \\*check\\.C\n"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnWrongSetUpTestArgCount(c *C) {
helper := WrongSetUpTestArgCountHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(len(helper.calls), Equals, 0)
expected :=
"^\n-+\n" +
"PANIC: fixture_test\\.go:[0-9]+: " +
"WrongSetUpTestArgCountHelper\\.SetUpTest\n\n" +
"\\.\\.\\. Panic: WrongSetUpTestArgCountHelper\\.SetUpTest argument " +
"should be \\*check\\.C\n"
c.Check(output.value, Matches, expected)
}
func (s *FixtureS) TestPanicOnWrongSetUpSuiteArgCount(c *C) {
helper := WrongSetUpSuiteArgCountHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(len(helper.calls), Equals, 0)
expected :=
"^\n-+\n" +
"PANIC: fixture_test\\.go:[0-9]+: " +
"WrongSetUpSuiteArgCountHelper\\.SetUpSuite\n\n" +
"\\.\\.\\. Panic: WrongSetUpSuiteArgCountHelper" +
"\\.SetUpSuite argument should be \\*check\\.C\n"
c.Check(output.value, Matches, expected)
}
// -----------------------------------------------------------------------
// Helper test suites with wrong function arguments.
type WrongTestArgHelper struct {
FixtureHelper
}
func (s *WrongTestArgHelper) Test1(t int) {
}
type WrongSetUpTestArgHelper struct {
FixtureHelper
}
func (s *WrongSetUpTestArgHelper) SetUpTest(t int) {
}
type WrongSetUpSuiteArgHelper struct {
FixtureHelper
}
func (s *WrongSetUpSuiteArgHelper) SetUpSuite(t int) {
}
type WrongTestArgCountHelper struct {
FixtureHelper
}
func (s *WrongTestArgCountHelper) Test1(c *C, i int) {
}
type WrongSetUpTestArgCountHelper struct {
FixtureHelper
}
func (s *WrongSetUpTestArgCountHelper) SetUpTest(c *C, i int) {
}
type WrongSetUpSuiteArgCountHelper struct {
FixtureHelper
}
func (s *WrongSetUpSuiteArgCountHelper) SetUpSuite(c *C, i int) {
}
// -----------------------------------------------------------------------
// Ensure fixture doesn't run without tests.
type NoTestsHelper struct {
hasRun bool
}
func (s *NoTestsHelper) SetUpSuite(c *C) {
s.hasRun = true
}
func (s *NoTestsHelper) TearDownSuite(c *C) {
s.hasRun = true
}
func (s *FixtureS) TestFixtureDoesntRunWithoutTests(c *C) {
helper := NoTestsHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Check(helper.hasRun, Equals, false)
}
// -----------------------------------------------------------------------
// Verify that checks and assertions work correctly inside the fixture.
type FixtureCheckHelper struct {
fail string
completed bool
}
func (s *FixtureCheckHelper) SetUpSuite(c *C) {
switch s.fail {
case "SetUpSuiteAssert":
c.Assert(false, Equals, true)
case "SetUpSuiteCheck":
c.Check(false, Equals, true)
}
s.completed = true
}
func (s *FixtureCheckHelper) SetUpTest(c *C) {
switch s.fail {
case "SetUpTestAssert":
c.Assert(false, Equals, true)
case "SetUpTestCheck":
c.Check(false, Equals, true)
}
s.completed = true
}
func (s *FixtureCheckHelper) Test(c *C) {
// Do nothing.
}
func (s *FixtureS) TestSetUpSuiteCheck(c *C) {
helper := FixtureCheckHelper{fail: "SetUpSuiteCheck"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Assert(output.value, Matches,
"\n---+\n"+
"FAIL: fixture_test\\.go:[0-9]+: "+
"FixtureCheckHelper\\.SetUpSuite\n\n"+
"fixture_test\\.go:[0-9]+:\n"+
" c\\.Check\\(false, Equals, true\\)\n"+
"\\.+ obtained bool = false\n"+
"\\.+ expected bool = true\n\n")
c.Assert(helper.completed, Equals, true)
}
func (s *FixtureS) TestSetUpSuiteAssert(c *C) {
helper := FixtureCheckHelper{fail: "SetUpSuiteAssert"}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Assert(output.value, Matches,
"\n---+\n"+
"FAIL: fixture_test\\.go:[0-9]+: "+
"FixtureCheckHelper\\.SetUpSuite\n\n"+
"fixture_test\\.go:[0-9]+:\n"+
" c\\.Assert\\(false, Equals, true\\)\n"+
"\\.+ obtained bool = false\n"+
"\\.+ expected bool = true\n\n")
c.Assert(helper.completed, Equals, false)
}
// -----------------------------------------------------------------------
// Verify that logging within SetUpTest() persists within the test log itself.
type FixtureLogHelper struct {
c *C
}
func (s *FixtureLogHelper) SetUpTest(c *C) {
s.c = c
c.Log("1")
}
func (s *FixtureLogHelper) Test(c *C) {
c.Log("2")
s.c.Log("3")
c.Log("4")
c.Fail()
}
func (s *FixtureLogHelper) TearDownTest(c *C) {
s.c.Log("5")
}
func (s *FixtureS) TestFixtureLogging(c *C) {
helper := FixtureLogHelper{}
output := String{}
Run(&helper, &RunConf{Output: &output})
c.Assert(output.value, Matches,
"\n---+\n"+
"FAIL: fixture_test\\.go:[0-9]+: "+
"FixtureLogHelper\\.Test\n\n"+
"1\n2\n3\n4\n5\n")
}
// -----------------------------------------------------------------------
// Skip() within fixture methods.
func (s *FixtureS) TestSkipSuite(c *C) {
helper := FixtureHelper{skip: true, skipOnN: 0}
output := String{}
result := Run(&helper, &RunConf{Output: &output})
c.Assert(output.value, Equals, "")
c.Assert(helper.calls[0], Equals, "SetUpSuite")
c.Assert(helper.calls[1], Equals, "TearDownSuite")
c.Assert(len(helper.calls), Equals, 2)
c.Assert(result.Skipped, Equals, 2)
}
func (s *FixtureS) TestSkipTest(c *C) {
helper := FixtureHelper{skip: true, skipOnN: 1}
output := String{}
result := Run(&helper, &RunConf{Output: &output})
c.Assert(helper.calls[0], Equals, "SetUpSuite")
c.Assert(helper.calls[1], Equals, "SetUpTest")
c.Assert(helper.calls[2], Equals, "SetUpTest")
c.Assert(helper.calls[3], Equals, "Test2")
c.Assert(helper.calls[4], Equals, "TearDownTest")
c.Assert(helper.calls[5], Equals, "TearDownSuite")
c.Assert(len(helper.calls), Equals, 6)
c.Assert(result.Skipped, Equals, 1)
}
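TestSkipSuite and TestSkipTest above pin down how Skip() behaves when called from a fixture method versus a test: skipping in SetUpSuite marks every test in the suite as skipped, while skipping in SetUpTest only skips that one test. A common, hypothetical use of this in a consuming suite is gating integration tests on the environment; the sketch below assumes an illustrative RUN_INTEGRATION variable and the TestingT wiring shown earlier.
package mypkg_test
import (
	"os"
	"github.com/minio/check"
)
type IntegrationSuite struct{}
var _ = check.Suite(&IntegrationSuite{})
// Skipping inside SetUpSuite marks every test in the suite as skipped,
// which is the behaviour TestSkipSuite asserts (result.Skipped == 2).
func (s *IntegrationSuite) SetUpSuite(c *check.C) {
	if os.Getenv("RUN_INTEGRATION") == "" { // hypothetical gating variable
		c.Skip("integration environment not configured")
	}
}
func (s *IntegrationSuite) TestEndToEnd(c *check.C) {
	c.Assert(true, check.Equals, true)
}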


@@ -1,335 +0,0 @@
// These tests check that the foundations of gocheck are working properly.
// They assume that fundamental failing is already working, though,
// since this was tested in bootstrap_test.go. Even then, some care may
// still have to be taken when using external functions, since they should
// of course not rely on functionality tested here.
package check_test
import (
"fmt"
"github.com/minio/check"
"log"
"os"
"regexp"
"strings"
)
// -----------------------------------------------------------------------
// Foundation test suite.
type FoundationS struct{}
var foundationS = check.Suite(&FoundationS{})
func (s *FoundationS) TestCountSuite(c *check.C) {
suitesRun += 1
}
func (s *FoundationS) TestErrorf(c *check.C) {
// Do not use checkState() here. It depends on Errorf() working.
expectedLog := fmt.Sprintf("foundation_test.go:%d:\n"+
" c.Errorf(\"Error %%v!\", \"message\")\n"+
"... Error: Error message!\n\n",
getMyLine()+1)
c.Errorf("Error %v!", "message")
failed := c.Failed()
c.Succeed()
if log := c.GetTestLog(); log != expectedLog {
c.Logf("Errorf() logged %#v rather than %#v", log, expectedLog)
c.Fail()
}
if !failed {
c.Logf("Errorf() didn't put the test in a failed state")
c.Fail()
}
}
func (s *FoundationS) TestError(c *check.C) {
expectedLog := fmt.Sprintf("foundation_test.go:%d:\n"+
" c\\.Error\\(\"Error \", \"message!\"\\)\n"+
"\\.\\.\\. Error: Error message!\n\n",
getMyLine()+1)
c.Error("Error ", "message!")
checkState(c, nil,
&expectedState{
name: "Error(`Error `, `message!`)",
failed: true,
log: expectedLog,
})
}
func (s *FoundationS) TestFailNow(c *check.C) {
defer (func() {
if !c.Failed() {
c.Error("FailNow() didn't fail the test")
} else {
c.Succeed()
if c.GetTestLog() != "" {
c.Error("Something got logged:\n" + c.GetTestLog())
}
}
})()
c.FailNow()
c.Log("FailNow() didn't stop the test")
}
func (s *FoundationS) TestSucceedNow(c *check.C) {
defer (func() {
if c.Failed() {
c.Error("SucceedNow() didn't succeed the test")
}
if c.GetTestLog() != "" {
c.Error("Something got logged:\n" + c.GetTestLog())
}
})()
c.Fail()
c.SucceedNow()
c.Log("SucceedNow() didn't stop the test")
}
func (s *FoundationS) TestFailureHeader(c *check.C) {
output := String{}
failHelper := FailHelper{}
check.Run(&failHelper, &check.RunConf{Output: &output})
header := fmt.Sprintf(""+
"\n-----------------------------------"+
"-----------------------------------\n"+
"FAIL: check_test.go:%d: FailHelper.TestLogAndFail\n",
failHelper.testLine)
if strings.Index(output.value, header) == -1 {
c.Errorf(""+
"Failure didn't print a proper header.\n"+
"... Got:\n%s... Expected something with:\n%s",
output.value, header)
}
}
func (s *FoundationS) TestFatal(c *check.C) {
var line int
defer (func() {
if !c.Failed() {
c.Error("Fatal() didn't fail the test")
} else {
c.Succeed()
expected := fmt.Sprintf("foundation_test.go:%d:\n"+
" c.Fatal(\"Die \", \"now!\")\n"+
"... Error: Die now!\n\n",
line)
if c.GetTestLog() != expected {
c.Error("Incorrect log:", c.GetTestLog())
}
}
})()
line = getMyLine() + 1
c.Fatal("Die ", "now!")
c.Log("Fatal() didn't stop the test")
}
func (s *FoundationS) TestFatalf(c *check.C) {
var line int
defer (func() {
if !c.Failed() {
c.Error("Fatalf() didn't fail the test")
} else {
c.Succeed()
expected := fmt.Sprintf("foundation_test.go:%d:\n"+
" c.Fatalf(\"Die %%s!\", \"now\")\n"+
"... Error: Die now!\n\n",
line)
if c.GetTestLog() != expected {
c.Error("Incorrect log:", c.GetTestLog())
}
}
})()
line = getMyLine() + 1
c.Fatalf("Die %s!", "now")
c.Log("Fatalf() didn't stop the test")
}
func (s *FoundationS) TestCallerLoggingInsideTest(c *check.C) {
log := fmt.Sprintf(""+
"foundation_test.go:%d:\n"+
" result := c.Check\\(10, check.Equals, 20\\)\n"+
"\\.\\.\\. obtained int = 10\n"+
"\\.\\.\\. expected int = 20\n\n",
getMyLine()+1)
result := c.Check(10, check.Equals, 20)
checkState(c, result,
&expectedState{
name: "Check(10, Equals, 20)",
result: false,
failed: true,
log: log,
})
}
func (s *FoundationS) TestCallerLoggingInDifferentFile(c *check.C) {
result, line := checkEqualWrapper(c, 10, 20)
testLine := getMyLine() - 1
log := fmt.Sprintf(""+
"foundation_test.go:%d:\n"+
" result, line := checkEqualWrapper\\(c, 10, 20\\)\n"+
"check_test.go:%d:\n"+
" return c.Check\\(obtained, check.Equals, expected\\), getMyLine\\(\\)\n"+
"\\.\\.\\. obtained int = 10\n"+
"\\.\\.\\. expected int = 20\n\n",
testLine, line)
checkState(c, result,
&expectedState{
name: "Check(10, Equals, 20)",
result: false,
failed: true,
log: log,
})
}
// -----------------------------------------------------------------------
// ExpectFailure() inverts the logic of failure.
type ExpectFailureSucceedHelper struct{}
func (s *ExpectFailureSucceedHelper) TestSucceed(c *check.C) {
c.ExpectFailure("It booms!")
c.Error("Boom!")
}
type ExpectFailureFailHelper struct{}
func (s *ExpectFailureFailHelper) TestFail(c *check.C) {
c.ExpectFailure("Bug #XYZ")
}
func (s *FoundationS) TestExpectFailureFail(c *check.C) {
helper := ExpectFailureFailHelper{}
output := String{}
result := check.Run(&helper, &check.RunConf{Output: &output})
expected := "" +
"^\n-+\n" +
"FAIL: foundation_test\\.go:[0-9]+:" +
" ExpectFailureFailHelper\\.TestFail\n\n" +
"\\.\\.\\. Error: Test succeeded, but was expected to fail\n" +
"\\.\\.\\. Reason: Bug #XYZ\n$"
matched, err := regexp.MatchString(expected, output.value)
if err != nil {
c.Error("Bad expression: ", expected)
} else if !matched {
c.Error("ExpectFailure() didn't log properly:\n", output.value)
}
c.Assert(result.ExpectedFailures, check.Equals, 0)
}
func (s *FoundationS) TestExpectFailureSucceed(c *check.C) {
helper := ExpectFailureSucceedHelper{}
output := String{}
result := check.Run(&helper, &check.RunConf{Output: &output})
c.Assert(output.value, check.Equals, "")
c.Assert(result.ExpectedFailures, check.Equals, 1)
}
func (s *FoundationS) TestExpectFailureSucceedVerbose(c *check.C) {
helper := ExpectFailureSucceedHelper{}
output := String{}
result := check.Run(&helper, &check.RunConf{Output: &output, Verbose: true})
expected := "" +
"FAIL EXPECTED: foundation_test\\.go:[0-9]+:" +
" ExpectFailureSucceedHelper\\.TestSucceed \\(It booms!\\)\t *[.0-9]+s\n"
matched, err := regexp.MatchString(expected, output.value)
if err != nil {
c.Error("Bad expression: ", expected)
} else if !matched {
c.Error("ExpectFailure() didn't log properly:\n", output.value)
}
c.Assert(result.ExpectedFailures, check.Equals, 1)
}
// -----------------------------------------------------------------------
// Skip() allows stopping a test without positive/negative results.
type SkipTestHelper struct{}
func (s *SkipTestHelper) TestFail(c *check.C) {
c.Skip("Wrong platform or whatever")
c.Error("Boom!")
}
func (s *FoundationS) TestSkip(c *check.C) {
helper := SkipTestHelper{}
output := String{}
check.Run(&helper, &check.RunConf{Output: &output})
if output.value != "" {
c.Error("Skip() logged something:\n", output.value)
}
}
func (s *FoundationS) TestSkipVerbose(c *check.C) {
helper := SkipTestHelper{}
output := String{}
check.Run(&helper, &check.RunConf{Output: &output, Verbose: true})
expected := "SKIP: foundation_test\\.go:[0-9]+: SkipTestHelper\\.TestFail" +
" \\(Wrong platform or whatever\\)"
matched, err := regexp.MatchString(expected, output.value)
if err != nil {
c.Error("Bad expression: ", expected)
} else if !matched {
c.Error("Skip() didn't log properly:\n", output.value)
}
}
// -----------------------------------------------------------------------
// Check minimum *log.Logger interface provided by *check.C.
type minLogger interface {
Output(calldepth int, s string) error
}
func (s *BootstrapS) TestMinLogger(c *check.C) {
var logger minLogger
logger = log.New(os.Stderr, "", 0)
logger = c
logger.Output(0, "Hello there")
expected := `\[LOG\] [0-9]+:[0-9][0-9]\.[0-9][0-9][0-9] +Hello there\n`
output := c.GetTestLog()
c.Assert(output, check.Matches, expected)
}
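The minLogger check above pins down the small logging surface that *check.C shares with *log.Logger. In practice this means code under test that accepts such an interface can be handed the test context directly, so its messages land in the test log. A hypothetical sketch, assuming only the Output method verified above and the TestingT wiring shown earlier:
package mypkg_test
import "github.com/minio/check"
// logSink is the minimal logging interface expected by the code under test;
// *check.C satisfies it via the Output method exercised in TestMinLogger.
type logSink interface {
	Output(calldepth int, s string) error
}
func doWork(l logSink) {
	l.Output(2, "work started")
}
type LogSuite struct{}
var _ = check.Suite(&LogSuite{})
func (s *LogSuite) TestWorkLogs(c *check.C) {
	doWork(c) // the message ends up in c.GetTestLog()
}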
// -----------------------------------------------------------------------
// Ensure that suites with embedded types are working fine, including
// the workaround for issue 906.
type EmbeddedInternalS struct {
called bool
}
type EmbeddedS struct {
EmbeddedInternalS
}
var embeddedS = check.Suite(&EmbeddedS{})
func (s *EmbeddedS) TestCountSuite(c *check.C) {
suitesRun += 1
}
func (s *EmbeddedInternalS) TestMethod(c *check.C) {
c.Error("TestMethod() of the embedded type was called!?")
}
func (s *EmbeddedS) TestMethod(c *check.C) {
// http://code.google.com/p/go/issues/detail?id=906
c.Check(s.called, check.Equals, false) // Go issue 906 is affecting the runner?
s.called = true
}


@@ -1,519 +0,0 @@
// These tests verify the inner workings of the helper methods associated
// with check.C.
package check_test
import (
"github.com/minio/check"
"os"
"reflect"
"runtime"
"sync"
)
var helpersS = check.Suite(&HelpersS{})
type HelpersS struct{}
func (s *HelpersS) TestCountSuite(c *check.C) {
suitesRun += 1
}
// -----------------------------------------------------------------------
// Fake checker and bug info to verify the behavior of Assert() and Check().
type MyChecker struct {
info *check.CheckerInfo
params []interface{}
names []string
result bool
error string
}
func (checker *MyChecker) Info() *check.CheckerInfo {
if checker.info == nil {
return &check.CheckerInfo{Name: "MyChecker", Params: []string{"myobtained", "myexpected"}}
}
return checker.info
}
func (checker *MyChecker) Check(params []interface{}, names []string) (bool, string) {
rparams := checker.params
rnames := checker.names
checker.params = append([]interface{}{}, params...)
checker.names = append([]string{}, names...)
if rparams != nil {
copy(params, rparams)
}
if rnames != nil {
copy(names, rnames)
}
return checker.result, checker.error
}
type myCommentType string
func (c myCommentType) CheckCommentString() string {
return string(c)
}
func myComment(s string) myCommentType {
return myCommentType(s)
}
// -----------------------------------------------------------------------
// Ensure a real checker actually works fine.
func (s *HelpersS) TestCheckerInterface(c *check.C) {
testHelperSuccess(c, "Check(1, Equals, 1)", true, func() interface{} {
return c.Check(1, check.Equals, 1)
})
}
// -----------------------------------------------------------------------
// Tests for Check(), mostly the same as for Assert() following these.
func (s *HelpersS) TestCheckSucceedWithExpected(c *check.C) {
checker := &MyChecker{result: true}
testHelperSuccess(c, "Check(1, checker, 2)", true, func() interface{} {
return c.Check(1, checker, 2)
})
if !reflect.DeepEqual(checker.params, []interface{}{1, 2}) {
c.Fatalf("Bad params for check: %#v", checker.params)
}
}
func (s *HelpersS) TestCheckSucceedWithoutExpected(c *check.C) {
checker := &MyChecker{result: true, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
testHelperSuccess(c, "Check(1, checker)", true, func() interface{} {
return c.Check(1, checker)
})
if !reflect.DeepEqual(checker.params, []interface{}{1}) {
c.Fatalf("Bad params for check: %#v", checker.params)
}
}
func (s *HelpersS) TestCheckFailWithExpected(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker, 2\\)\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n\n"
testHelperFailure(c, "Check(1, checker, 2)", false, false, log,
func() interface{} {
return c.Check(1, checker, 2)
})
}
func (s *HelpersS) TestCheckFailWithExpectedAndComment(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker, 2, myComment\\(\"Hello world!\"\\)\\)\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n" +
"\\.+ Hello world!\n\n"
testHelperFailure(c, "Check(1, checker, 2, msg)", false, false, log,
func() interface{} {
return c.Check(1, checker, 2, myComment("Hello world!"))
})
}
func (s *HelpersS) TestCheckFailWithExpectedAndStaticComment(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" // Nice leading comment\\.\n" +
" return c\\.Check\\(1, checker, 2\\) // Hello there\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n\n"
testHelperFailure(c, "Check(1, checker, 2, msg)", false, false, log,
func() interface{} {
// Nice leading comment.
return c.Check(1, checker, 2) // Hello there
})
}
func (s *HelpersS) TestCheckFailWithoutExpected(c *check.C) {
checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker\\)\n" +
"\\.+ myvalue int = 1\n\n"
testHelperFailure(c, "Check(1, checker)", false, false, log,
func() interface{} {
return c.Check(1, checker)
})
}
func (s *HelpersS) TestCheckFailWithoutExpectedAndMessage(c *check.C) {
checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker, myComment\\(\"Hello world!\"\\)\\)\n" +
"\\.+ myvalue int = 1\n" +
"\\.+ Hello world!\n\n"
testHelperFailure(c, "Check(1, checker, msg)", false, false, log,
func() interface{} {
return c.Check(1, checker, myComment("Hello world!"))
})
}
func (s *HelpersS) TestCheckWithMissingExpected(c *check.C) {
checker := &MyChecker{result: true}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker\\)\n" +
"\\.+ Check\\(myobtained, MyChecker, myexpected\\):\n" +
"\\.+ Wrong number of parameters for MyChecker: " +
"want 3, got 2\n\n"
testHelperFailure(c, "Check(1, checker, !?)", false, false, log,
func() interface{} {
return c.Check(1, checker)
})
}
func (s *HelpersS) TestCheckWithTooManyExpected(c *check.C) {
checker := &MyChecker{result: true}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker, 2, 3\\)\n" +
"\\.+ Check\\(myobtained, MyChecker, myexpected\\):\n" +
"\\.+ Wrong number of parameters for MyChecker: " +
"want 3, got 4\n\n"
testHelperFailure(c, "Check(1, checker, 2, 3)", false, false, log,
func() interface{} {
return c.Check(1, checker, 2, 3)
})
}
func (s *HelpersS) TestCheckWithError(c *check.C) {
checker := &MyChecker{result: false, error: "Some not so cool data provided!"}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker, 2\\)\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n" +
"\\.+ Some not so cool data provided!\n\n"
testHelperFailure(c, "Check(1, checker, 2)", false, false, log,
func() interface{} {
return c.Check(1, checker, 2)
})
}
func (s *HelpersS) TestCheckWithNilChecker(c *check.C) {
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, nil\\)\n" +
"\\.+ Check\\(obtained, nil!\\?, \\.\\.\\.\\):\n" +
"\\.+ Oops\\.\\. you've provided a nil checker!\n\n"
testHelperFailure(c, "Check(obtained, nil)", false, false, log,
func() interface{} {
return c.Check(1, nil)
})
}
func (s *HelpersS) TestCheckWithParamsAndNamesMutation(c *check.C) {
checker := &MyChecker{result: false, params: []interface{}{3, 4}, names: []string{"newobtained", "newexpected"}}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" return c\\.Check\\(1, checker, 2\\)\n" +
"\\.+ newobtained int = 3\n" +
"\\.+ newexpected int = 4\n\n"
testHelperFailure(c, "Check(1, checker, 2) with mutation", false, false, log,
func() interface{} {
return c.Check(1, checker, 2)
})
}
// -----------------------------------------------------------------------
// Tests for Assert(), mostly the same as for Check() above.
func (s *HelpersS) TestAssertSucceedWithExpected(c *check.C) {
checker := &MyChecker{result: true}
testHelperSuccess(c, "Assert(1, checker, 2)", nil, func() interface{} {
c.Assert(1, checker, 2)
return nil
})
if !reflect.DeepEqual(checker.params, []interface{}{1, 2}) {
c.Fatalf("Bad params for check: %#v", checker.params)
}
}
func (s *HelpersS) TestAssertSucceedWithoutExpected(c *check.C) {
checker := &MyChecker{result: true, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
testHelperSuccess(c, "Assert(1, checker)", nil, func() interface{} {
c.Assert(1, checker)
return nil
})
if !reflect.DeepEqual(checker.params, []interface{}{1}) {
c.Fatalf("Bad params for check: %#v", checker.params)
}
}
func (s *HelpersS) TestAssertFailWithExpected(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, checker, 2\\)\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n\n"
testHelperFailure(c, "Assert(1, checker, 2)", nil, true, log,
func() interface{} {
c.Assert(1, checker, 2)
return nil
})
}
func (s *HelpersS) TestAssertFailWithExpectedAndMessage(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, checker, 2, myComment\\(\"Hello world!\"\\)\\)\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n" +
"\\.+ Hello world!\n\n"
testHelperFailure(c, "Assert(1, checker, 2, msg)", nil, true, log,
func() interface{} {
c.Assert(1, checker, 2, myComment("Hello world!"))
return nil
})
}
func (s *HelpersS) TestAssertFailWithoutExpected(c *check.C) {
checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, checker\\)\n" +
"\\.+ myvalue int = 1\n\n"
testHelperFailure(c, "Assert(1, checker)", nil, true, log,
func() interface{} {
c.Assert(1, checker)
return nil
})
}
func (s *HelpersS) TestAssertFailWithoutExpectedAndMessage(c *check.C) {
checker := &MyChecker{result: false, info: &check.CheckerInfo{Params: []string{"myvalue"}}}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, checker, myComment\\(\"Hello world!\"\\)\\)\n" +
"\\.+ myvalue int = 1\n" +
"\\.+ Hello world!\n\n"
testHelperFailure(c, "Assert(1, checker, msg)", nil, true, log,
func() interface{} {
c.Assert(1, checker, myComment("Hello world!"))
return nil
})
}
func (s *HelpersS) TestAssertWithMissingExpected(c *check.C) {
checker := &MyChecker{result: true}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, checker\\)\n" +
"\\.+ Assert\\(myobtained, MyChecker, myexpected\\):\n" +
"\\.+ Wrong number of parameters for MyChecker: " +
"want 3, got 2\n\n"
testHelperFailure(c, "Assert(1, checker, !?)", nil, true, log,
func() interface{} {
c.Assert(1, checker)
return nil
})
}
func (s *HelpersS) TestAssertWithError(c *check.C) {
checker := &MyChecker{result: false, error: "Some not so cool data provided!"}
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, checker, 2\\)\n" +
"\\.+ myobtained int = 1\n" +
"\\.+ myexpected int = 2\n" +
"\\.+ Some not so cool data provided!\n\n"
testHelperFailure(c, "Assert(1, checker, 2)", nil, true, log,
func() interface{} {
c.Assert(1, checker, 2)
return nil
})
}
func (s *HelpersS) TestAssertWithNilChecker(c *check.C) {
log := "(?s)helpers_test\\.go:[0-9]+:.*\nhelpers_test\\.go:[0-9]+:\n" +
" c\\.Assert\\(1, nil\\)\n" +
"\\.+ Assert\\(obtained, nil!\\?, \\.\\.\\.\\):\n" +
"\\.+ Oops\\.\\. you've provided a nil checker!\n\n"
testHelperFailure(c, "Assert(obtained, nil)", nil, true, log,
func() interface{} {
c.Assert(1, nil)
return nil
})
}
// -----------------------------------------------------------------------
// Ensure that values logged work properly in some interesting cases.
func (s *HelpersS) TestValueLoggingWithArrays(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test.go:[0-9]+:.*\nhelpers_test.go:[0-9]+:\n" +
" return c\\.Check\\(\\[\\]byte{1, 2}, checker, \\[\\]byte{1, 3}\\)\n" +
"\\.+ myobtained \\[\\]uint8 = \\[\\]byte{0x1, 0x2}\n" +
"\\.+ myexpected \\[\\]uint8 = \\[\\]byte{0x1, 0x3}\n\n"
testHelperFailure(c, "Check([]byte{1}, chk, []byte{3})", false, false, log,
func() interface{} {
return c.Check([]byte{1, 2}, checker, []byte{1, 3})
})
}
func (s *HelpersS) TestValueLoggingWithMultiLine(c *check.C) {
checker := &MyChecker{result: false}
log := "(?s)helpers_test.go:[0-9]+:.*\nhelpers_test.go:[0-9]+:\n" +
" return c\\.Check\\(\"a\\\\nb\\\\n\", checker, \"a\\\\nb\\\\nc\"\\)\n" +
"\\.+ myobtained string = \"\" \\+\n" +
"\\.+ \"a\\\\n\" \\+\n" +
"\\.+ \"b\\\\n\"\n" +
"\\.+ myexpected string = \"\" \\+\n" +
"\\.+ \"a\\\\n\" \\+\n" +
"\\.+ \"b\\\\n\" \\+\n" +
"\\.+ \"c\"\n\n"
testHelperFailure(c, `Check("a\nb\n", chk, "a\nb\nc")`, false, false, log,
func() interface{} {
return c.Check("a\nb\n", checker, "a\nb\nc")
})
}
func (s *HelpersS) TestValueLoggingWithMultiLineException(c *check.C) {
// If the newline is at the end of the string, don't log as multi-line.
checker := &MyChecker{result: false}
log := "(?s)helpers_test.go:[0-9]+:.*\nhelpers_test.go:[0-9]+:\n" +
" return c\\.Check\\(\"a b\\\\n\", checker, \"a\\\\nb\"\\)\n" +
"\\.+ myobtained string = \"a b\\\\n\"\n" +
"\\.+ myexpected string = \"\" \\+\n" +
"\\.+ \"a\\\\n\" \\+\n" +
"\\.+ \"b\"\n\n"
testHelperFailure(c, `Check("a b\n", chk, "a\nb")`, false, false, log,
func() interface{} {
return c.Check("a b\n", checker, "a\nb")
})
}
// -----------------------------------------------------------------------
// MakeDir() tests.
type MkDirHelper struct {
path1 string
path2 string
isDir1 bool
isDir2 bool
isDir3 bool
isDir4 bool
}
func (s *MkDirHelper) SetUpSuite(c *check.C) {
s.path1 = c.MkDir()
s.isDir1 = isDir(s.path1)
}
func (s *MkDirHelper) Test(c *check.C) {
s.path2 = c.MkDir()
s.isDir2 = isDir(s.path2)
}
func (s *MkDirHelper) TearDownSuite(c *check.C) {
s.isDir3 = isDir(s.path1)
s.isDir4 = isDir(s.path2)
}
func (s *HelpersS) TestMkDir(c *check.C) {
helper := MkDirHelper{}
output := String{}
check.Run(&helper, &check.RunConf{Output: &output})
c.Assert(output.value, check.Equals, "")
c.Check(helper.isDir1, check.Equals, true)
c.Check(helper.isDir2, check.Equals, true)
c.Check(helper.isDir3, check.Equals, true)
c.Check(helper.isDir4, check.Equals, true)
c.Check(helper.path1, check.Not(check.Equals),
helper.path2)
c.Check(isDir(helper.path1), check.Equals, false)
c.Check(isDir(helper.path2), check.Equals, false)
}
func isDir(path string) bool {
if stat, err := os.Stat(path); err == nil {
return stat.IsDir()
}
return false
}
// Concurrent logging should not corrupt the underlying buffer.
// Use go test -race to detect the race in this test.
func (s *HelpersS) TestConcurrentLogging(c *check.C) {
defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(runtime.NumCPU()))
var start, stop sync.WaitGroup
start.Add(1)
for i, n := 0, runtime.NumCPU()*2; i < n; i++ {
stop.Add(1)
go func(i int) {
start.Wait()
for j := 0; j < 30; j++ {
c.Logf("Worker %d: line %d", i, j)
}
stop.Done()
}(i)
}
start.Done()
stop.Wait()
}
// -----------------------------------------------------------------------
// Test the TestName function
type TestNameHelper struct {
name1 string
name2 string
name3 string
name4 string
name5 string
}
func (s *TestNameHelper) SetUpSuite(c *check.C) { s.name1 = c.TestName() }
func (s *TestNameHelper) SetUpTest(c *check.C) { s.name2 = c.TestName() }
func (s *TestNameHelper) Test(c *check.C) { s.name3 = c.TestName() }
func (s *TestNameHelper) TearDownTest(c *check.C) { s.name4 = c.TestName() }
func (s *TestNameHelper) TearDownSuite(c *check.C) { s.name5 = c.TestName() }
func (s *HelpersS) TestTestName(c *check.C) {
helper := TestNameHelper{}
output := String{}
check.Run(&helper, &check.RunConf{Output: &output})
c.Check(helper.name1, check.Equals, "")
c.Check(helper.name2, check.Equals, "TestNameHelper.Test")
c.Check(helper.name3, check.Equals, "TestNameHelper.Test")
c.Check(helper.name4, check.Equals, "TestNameHelper.Test")
c.Check(helper.name5, check.Equals, "")
}
// -----------------------------------------------------------------------
// A couple of helper functions to test helper functions. :-)
func testHelperSuccess(c *check.C, name string, expectedResult interface{}, closure func() interface{}) {
var result interface{}
defer (func() {
if err := recover(); err != nil {
panic(err)
}
checkState(c, result,
&expectedState{
name: name,
result: expectedResult,
failed: false,
log: "",
})
})()
result = closure()
}
func testHelperFailure(c *check.C, name string, expectedResult interface{}, shouldStop bool, log string, closure func() interface{}) {
var result interface{}
defer (func() {
if err := recover(); err != nil {
panic(err)
}
checkState(c, result,
&expectedState{
name: name,
result: expectedResult,
failed: true,
log: log,
})
})()
result = closure()
if shouldStop {
c.Logf("%s didn't stop when it should", name)
}
}
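The two helpers above drive custom checkers such as MyChecker through both their success and failure paths. For orientation, a minimal checker in the same shape could look like the sketch below (this assumes the package's Checker interface of Info and Check; isPositiveChecker is a hypothetical example, not part of the deleted file):

package check_test

import "github.com/minio/check"

// isPositiveChecker reports whether the obtained value is a positive int.
type isPositiveChecker struct{}

func (c *isPositiveChecker) Info() *check.CheckerInfo {
	return &check.CheckerInfo{Name: "IsPositive", Params: []string{"obtained"}}
}

func (c *isPositiveChecker) Check(params []interface{}, names []string) (bool, string) {
	n, ok := params[0].(int)
	if !ok {
		return false, "obtained value is not an int"
	}
	return n > 0, ""
}

// Usage inside a suite method: c.Assert(42, &isPositiveChecker{})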


@@ -1,110 +0,0 @@
package check_test
import (
. "github.com/minio/check"
)
var _ = Suite(&PrinterS{})
type PrinterS struct{}
func (s *PrinterS) TestCountSuite(c *C) {
suitesRun += 1
}
var printTestFuncLine int
func init() {
printTestFuncLine = getMyLine() + 3
}
func printTestFunc() {
println(1) // Comment1
if 2 == 2 { // Comment2
println(3) // Comment3
}
switch 5 {
case 6:
println(6) // Comment6
println(7)
}
switch interface{}(9).(type) { // Comment9
case int:
println(10)
println(11)
}
select {
case <-(chan bool)(nil):
println(14)
println(15)
default:
println(16)
println(17)
}
println(19,
20)
_ = func() {
println(21)
println(22)
}
println(24, func() {
println(25)
})
// Leading comment
// with multiple lines.
println(29) // Comment29
}
var printLineTests = []struct {
line int
output string
}{
{1, "println(1) // Comment1"},
{2, "if 2 == 2 { // Comment2\n ...\n}"},
{3, "println(3) // Comment3"},
{5, "switch 5 {\n...\n}"},
{6, "case 6:\n println(6) // Comment6\n ..."},
{7, "println(7)"},
{9, "switch interface{}(9).(type) { // Comment9\n...\n}"},
{10, "case int:\n println(10)\n ..."},
{14, "case <-(chan bool)(nil):\n println(14)\n ..."},
{15, "println(15)"},
{16, "default:\n println(16)\n ..."},
{17, "println(17)"},
{19, "println(19,\n 20)"},
{20, "println(19,\n 20)"},
{21, "_ = func() {\n println(21)\n println(22)\n}"},
{22, "println(22)"},
{24, "println(24, func() {\n println(25)\n})"},
{25, "println(25)"},
{26, "println(24, func() {\n println(25)\n})"},
{29, "// Leading comment\n// with multiple lines.\nprintln(29) // Comment29"},
}
// reformat broke test lines above
//func (s *PrinterS) TestPrintLine(c *C) {
// for _, test := range printLineTests {
// output, err := PrintLine("printer_test.go", printTestFuncLine+test.line)
// c.Assert(err, IsNil)
// c.Assert(output, Equals, test.output)
// }
//}
var indentTests = []struct {
in, out string
}{
{"", ""},
{"\n", "\n"},
{"a", ">>>a"},
{"a\n", ">>>a\n"},
{"a\nb", ">>>a\n>>>b"},
{" ", ">>> "},
}
func (s *PrinterS) TestIndent(c *C) {
for _, test := range indentTests {
out := Indent(test.in, ">>>")
c.Assert(out, Equals, test.out)
}
}


@@ -1,419 +0,0 @@
// These tests verify the test running logic.
package check_test
import (
"errors"
. "github.com/minio/check"
"os"
"sync"
)
var runnerS = Suite(&RunS{})
type RunS struct{}
func (s *RunS) TestCountSuite(c *C) {
suitesRun += 1
}
// -----------------------------------------------------------------------
// Tests ensuring result counting works properly.
func (s *RunS) TestSuccess(c *C) {
output := String{}
result := Run(&SuccessHelper{}, &RunConf{Output: &output})
c.Check(result.Succeeded, Equals, 1)
c.Check(result.Failed, Equals, 0)
c.Check(result.Skipped, Equals, 0)
c.Check(result.Panicked, Equals, 0)
c.Check(result.FixturePanicked, Equals, 0)
c.Check(result.Missed, Equals, 0)
c.Check(result.RunError, IsNil)
}
func (s *RunS) TestFailure(c *C) {
output := String{}
result := Run(&FailHelper{}, &RunConf{Output: &output})
c.Check(result.Succeeded, Equals, 0)
c.Check(result.Failed, Equals, 1)
c.Check(result.Skipped, Equals, 0)
c.Check(result.Panicked, Equals, 0)
c.Check(result.FixturePanicked, Equals, 0)
c.Check(result.Missed, Equals, 0)
c.Check(result.RunError, IsNil)
}
func (s *RunS) TestFixture(c *C) {
output := String{}
result := Run(&FixtureHelper{}, &RunConf{Output: &output})
c.Check(result.Succeeded, Equals, 2)
c.Check(result.Failed, Equals, 0)
c.Check(result.Skipped, Equals, 0)
c.Check(result.Panicked, Equals, 0)
c.Check(result.FixturePanicked, Equals, 0)
c.Check(result.Missed, Equals, 0)
c.Check(result.RunError, IsNil)
}
func (s *RunS) TestPanicOnTest(c *C) {
output := String{}
helper := &FixtureHelper{panicOn: "Test1"}
result := Run(helper, &RunConf{Output: &output})
c.Check(result.Succeeded, Equals, 1)
c.Check(result.Failed, Equals, 0)
c.Check(result.Skipped, Equals, 0)
c.Check(result.Panicked, Equals, 1)
c.Check(result.FixturePanicked, Equals, 0)
c.Check(result.Missed, Equals, 0)
c.Check(result.RunError, IsNil)
}
func (s *RunS) TestPanicOnSetUpTest(c *C) {
output := String{}
helper := &FixtureHelper{panicOn: "SetUpTest"}
result := Run(helper, &RunConf{Output: &output})
c.Check(result.Succeeded, Equals, 0)
c.Check(result.Failed, Equals, 0)
c.Check(result.Skipped, Equals, 0)
c.Check(result.Panicked, Equals, 0)
c.Check(result.FixturePanicked, Equals, 1)
c.Check(result.Missed, Equals, 2)
c.Check(result.RunError, IsNil)
}
func (s *RunS) TestPanicOnSetUpSuite(c *C) {
output := String{}
helper := &FixtureHelper{panicOn: "SetUpSuite"}
result := Run(helper, &RunConf{Output: &output})
c.Check(result.Succeeded, Equals, 0)
c.Check(result.Failed, Equals, 0)
c.Check(result.Skipped, Equals, 0)
c.Check(result.Panicked, Equals, 0)
c.Check(result.FixturePanicked, Equals, 1)
c.Check(result.Missed, Equals, 2)
c.Check(result.RunError, IsNil)
}
// -----------------------------------------------------------------------
// Check result aggregation.
func (s *RunS) TestAdd(c *C) {
result := &Result{
Succeeded: 1,
Skipped: 2,
Failed: 3,
Panicked: 4,
FixturePanicked: 5,
Missed: 6,
ExpectedFailures: 7,
}
result.Add(&Result{
Succeeded: 10,
Skipped: 20,
Failed: 30,
Panicked: 40,
FixturePanicked: 50,
Missed: 60,
ExpectedFailures: 70,
})
c.Check(result.Succeeded, Equals, 11)
c.Check(result.Skipped, Equals, 22)
c.Check(result.Failed, Equals, 33)
c.Check(result.Panicked, Equals, 44)
c.Check(result.FixturePanicked, Equals, 55)
c.Check(result.Missed, Equals, 66)
c.Check(result.ExpectedFailures, Equals, 77)
c.Check(result.RunError, IsNil)
}
// -----------------------------------------------------------------------
// Check the Passed() method.
func (s *RunS) TestPassed(c *C) {
c.Assert((&Result{}).Passed(), Equals, true)
c.Assert((&Result{Succeeded: 1}).Passed(), Equals, true)
c.Assert((&Result{Skipped: 1}).Passed(), Equals, true)
c.Assert((&Result{Failed: 1}).Passed(), Equals, false)
c.Assert((&Result{Panicked: 1}).Passed(), Equals, false)
c.Assert((&Result{FixturePanicked: 1}).Passed(), Equals, false)
c.Assert((&Result{Missed: 1}).Passed(), Equals, false)
c.Assert((&Result{RunError: errors.New("!")}).Passed(), Equals, false)
}
// -----------------------------------------------------------------------
// Check that result printing is working correctly.
func (s *RunS) TestPrintSuccess(c *C) {
result := &Result{Succeeded: 5}
c.Check(result.String(), Equals, "OK: 5 passed")
}
func (s *RunS) TestPrintFailure(c *C) {
result := &Result{Failed: 5}
c.Check(result.String(), Equals, "OOPS: 0 passed, 5 FAILED")
}
func (s *RunS) TestPrintSkipped(c *C) {
result := &Result{Skipped: 5}
c.Check(result.String(), Equals, "OK: 0 passed, 5 skipped")
}
func (s *RunS) TestPrintExpectedFailures(c *C) {
result := &Result{ExpectedFailures: 5}
c.Check(result.String(), Equals, "OK: 0 passed, 5 expected failures")
}
func (s *RunS) TestPrintPanicked(c *C) {
result := &Result{Panicked: 5}
c.Check(result.String(), Equals, "OOPS: 0 passed, 5 PANICKED")
}
func (s *RunS) TestPrintFixturePanicked(c *C) {
result := &Result{FixturePanicked: 5}
c.Check(result.String(), Equals, "OOPS: 0 passed, 5 FIXTURE-PANICKED")
}
func (s *RunS) TestPrintMissed(c *C) {
result := &Result{Missed: 5}
c.Check(result.String(), Equals, "OOPS: 0 passed, 5 MISSED")
}
func (s *RunS) TestPrintAll(c *C) {
result := &Result{Succeeded: 1, Skipped: 2, ExpectedFailures: 3,
Panicked: 4, FixturePanicked: 5, Missed: 6}
c.Check(result.String(), Equals,
"OOPS: 1 passed, 2 skipped, 3 expected failures, 4 PANICKED, "+
"5 FIXTURE-PANICKED, 6 MISSED")
}
func (s *RunS) TestPrintRunError(c *C) {
result := &Result{Succeeded: 1, Failed: 1,
RunError: errors.New("Kaboom!")}
c.Check(result.String(), Equals, "ERROR: Kaboom!")
}
// -----------------------------------------------------------------------
// Verify that the method pattern flag works correctly.
func (s *RunS) TestFilterTestName(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: "Test[91]"}
Run(&helper, &runConf)
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 5)
}
func (s *RunS) TestFilterTestNameWithAll(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: ".*"}
Run(&helper, &runConf)
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "SetUpTest")
c.Check(helper.calls[5], Equals, "Test2")
c.Check(helper.calls[6], Equals, "TearDownTest")
c.Check(helper.calls[7], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 8)
}
func (s *RunS) TestFilterSuiteName(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: "FixtureHelper"}
Run(&helper, &runConf)
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test1")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "SetUpTest")
c.Check(helper.calls[5], Equals, "Test2")
c.Check(helper.calls[6], Equals, "TearDownTest")
c.Check(helper.calls[7], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 8)
}
func (s *RunS) TestFilterSuiteNameAndTestName(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: "FixtureHelper\\.Test2"}
Run(&helper, &runConf)
c.Check(helper.calls[0], Equals, "SetUpSuite")
c.Check(helper.calls[1], Equals, "SetUpTest")
c.Check(helper.calls[2], Equals, "Test2")
c.Check(helper.calls[3], Equals, "TearDownTest")
c.Check(helper.calls[4], Equals, "TearDownSuite")
c.Check(len(helper.calls), Equals, 5)
}
func (s *RunS) TestFilterAllOut(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: "NotFound"}
Run(&helper, &runConf)
c.Check(len(helper.calls), Equals, 0)
}
func (s *RunS) TestRequirePartialMatch(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: "est"}
Run(&helper, &runConf)
c.Check(len(helper.calls), Equals, 8)
}
func (s *RunS) TestFilterError(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Filter: "]["}
result := Run(&helper, &runConf)
c.Check(result.String(), Equals,
"ERROR: Bad filter expression: error parsing regexp: missing closing ]: `[`")
c.Check(len(helper.calls), Equals, 0)
}
// -----------------------------------------------------------------------
// Verify that List works correctly.
func (s *RunS) TestListFiltered(c *C) {
names := List(&FixtureHelper{}, &RunConf{Filter: "1"})
c.Assert(names, DeepEquals, []string{
"FixtureHelper.Test1",
})
}
func (s *RunS) TestList(c *C) {
names := List(&FixtureHelper{}, &RunConf{})
c.Assert(names, DeepEquals, []string{
"FixtureHelper.Test1",
"FixtureHelper.Test2",
})
}
// -----------------------------------------------------------------------
// Verify that verbose mode prints tests which pass as well.
func (s *RunS) TestVerboseMode(c *C) {
helper := FixtureHelper{}
output := String{}
runConf := RunConf{Output: &output, Verbose: true}
Run(&helper, &runConf)
expected := "PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test1\t *[.0-9]+s\n" +
"PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t *[.0-9]+s\n"
c.Assert(output.value, Matches, expected)
}
func (s *RunS) TestVerboseModeWithFailBeforePass(c *C) {
helper := FixtureHelper{panicOn: "Test1"}
output := String{}
runConf := RunConf{Output: &output, Verbose: true}
Run(&helper, &runConf)
expected := "(?s).*PANIC.*\n-+\n" + // Should have an extra line.
"PASS: check_test\\.go:[0-9]+: FixtureHelper\\.Test2\t *[.0-9]+s\n"
c.Assert(output.value, Matches, expected)
}
// -----------------------------------------------------------------------
// Verify the stream output mode. In this mode there's no output caching.
type StreamHelper struct {
l2 sync.Mutex
l3 sync.Mutex
}
func (s *StreamHelper) SetUpSuite(c *C) {
c.Log("0")
}
func (s *StreamHelper) Test1(c *C) {
c.Log("1")
s.l2.Lock()
s.l3.Lock()
go func() {
s.l2.Lock() // Wait for "2".
c.Log("3")
s.l3.Unlock()
}()
}
func (s *StreamHelper) Test2(c *C) {
c.Log("2")
s.l2.Unlock()
s.l3.Lock() // Wait for "3".
c.Fail()
c.Log("4")
}
func (s *RunS) TestStreamMode(c *C) {
helper := &StreamHelper{}
output := String{}
runConf := RunConf{Output: &output, Stream: true}
Run(helper, &runConf)
expected := "START: run_test\\.go:[0-9]+: StreamHelper\\.SetUpSuite\n0\n" +
"PASS: run_test\\.go:[0-9]+: StreamHelper\\.SetUpSuite\t *[.0-9]+s\n\n" +
"START: run_test\\.go:[0-9]+: StreamHelper\\.Test1\n1\n" +
"PASS: run_test\\.go:[0-9]+: StreamHelper\\.Test1\t *[.0-9]+s\n\n" +
"START: run_test\\.go:[0-9]+: StreamHelper\\.Test2\n2\n3\n4\n" +
"FAIL: run_test\\.go:[0-9]+: StreamHelper\\.Test2\n\n"
c.Assert(output.value, Matches, expected)
}
type StreamMissHelper struct{}
func (s *StreamMissHelper) SetUpSuite(c *C) {
c.Log("0")
c.Fail()
}
func (s *StreamMissHelper) Test1(c *C) {
c.Log("1")
}
func (s *RunS) TestStreamModeWithMiss(c *C) {
helper := &StreamMissHelper{}
output := String{}
runConf := RunConf{Output: &output, Stream: true}
Run(helper, &runConf)
expected := "START: run_test\\.go:[0-9]+: StreamMissHelper\\.SetUpSuite\n0\n" +
"FAIL: run_test\\.go:[0-9]+: StreamMissHelper\\.SetUpSuite\n\n" +
"START: run_test\\.go:[0-9]+: StreamMissHelper\\.Test1\n" +
"MISS: run_test\\.go:[0-9]+: StreamMissHelper\\.Test1\n\n"
c.Assert(output.value, Matches, expected)
}
// -----------------------------------------------------------------------
// Verify that the keep work dir request indeed does so.
type WorkDirSuite struct{}
func (s *WorkDirSuite) Test(c *C) {
c.MkDir()
}
func (s *RunS) TestKeepWorkDir(c *C) {
output := String{}
runConf := RunConf{Output: &output, Verbose: true, KeepWorkDir: true}
result := Run(&WorkDirSuite{}, &runConf)
c.Assert(result.String(), Matches, ".*\nWORK="+result.WorkDir)
stat, err := os.Stat(result.WorkDir)
c.Assert(err, IsNil)
c.Assert(stat.IsDir(), Equals, true)
}


@@ -1,6 +0,0 @@
language: go
go: 1.1
script:
- go vet ./...
- go test -v ./...


@@ -1,13 +0,0 @@
#! /bin/bash
_cli_bash_autocomplete() {
local cur prev opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
}
complete -F _cli_bash_autocomplete $PROG


@@ -1,5 +0,0 @@
autoload -U compinit && compinit
autoload -U bashcompinit && bashcompinit
script_dir=$(dirname $0)
source ${script_dir}/bash_autocomplete


@@ -1,22 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe


@@ -1,3 +0,0 @@
# objx
* Jump into the [API Documentation](http://godoc.org/github.com/stretchr/objx)


@@ -1,179 +0,0 @@
package objx
import (
"fmt"
"regexp"
"strconv"
"strings"
)
// arrayAccesRegexString is the regex used to extract the array number
// from the access path
const arrayAccesRegexString = `^(.+)\[([0-9]+)\]$`
// arrayAccesRegex is the compiled arrayAccesRegexString
var arrayAccesRegex = regexp.MustCompile(arrayAccesRegexString)
// Get gets the value using the specified selector and
// returns it inside a new Obj object.
//
// If it cannot find the value, Get will return a nil
// value inside an instance of Obj.
//
// Get can only operate directly on map[string]interface{} and []interface.
//
// Example
//
// To access the title of the third chapter of the second book, do:
//
// o.Get("books[1].chapters[2].title")
func (m Map) Get(selector string) *Value {
rawObj := access(m, selector, nil, false, false)
return &Value{data: rawObj}
}
// Set sets the value using the specified selector and
// returns the object on which Set was called.
//
// Set can only operate directly on map[string]interface{} and []interface
//
// Example
//
// To set the title of the third chapter of the second book, do:
//
// o.Set("books[1].chapters[2].title","Time to Go")
func (m Map) Set(selector string, value interface{}) Map {
access(m, selector, value, true, false)
return m
}
// access accesses the object using the selector and performs the
// appropriate action.
func access(current, selector, value interface{}, isSet, panics bool) interface{} {
switch selector.(type) {
case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
if array, ok := current.([]interface{}); ok {
index := intFromInterface(selector)
if index >= len(array) {
if panics {
panic(fmt.Sprintf("objx: Index %d is out of range. Slice only contains %d items.", index, len(array)))
}
return nil
}
return array[index]
}
return nil
case string:
selStr := selector.(string)
selSegs := strings.SplitN(selStr, PathSeparator, 2)
thisSel := selSegs[0]
index := -1
var err error
// https://github.com/stretchr/objx/issues/12
if strings.Contains(thisSel, "[") {
arrayMatches := arrayAccesRegex.FindStringSubmatch(thisSel)
if len(arrayMatches) > 0 {
// Get the key into the map
thisSel = arrayMatches[1]
// Get the index into the array at the key
index, err = strconv.Atoi(arrayMatches[2])
if err != nil {
// This should never happen. If it does, something has gone
// seriously wrong. Panic.
panic("objx: Array index is not an integer. Must use array[int].")
}
}
}
if curMap, ok := current.(Map); ok {
current = map[string]interface{}(curMap)
}
// get the object in question
switch current.(type) {
case map[string]interface{}:
curMSI := current.(map[string]interface{})
if len(selSegs) <= 1 && isSet {
curMSI[thisSel] = value
return nil
} else {
current = curMSI[thisSel]
}
default:
current = nil
}
if current == nil && panics {
panic(fmt.Sprintf("objx: '%v' invalid on object.", selector))
}
// do we need to access the item of an array?
if index > -1 {
if array, ok := current.([]interface{}); ok {
if index < len(array) {
current = array[index]
} else {
if panics {
panic(fmt.Sprintf("objx: Index %d is out of range. Slice only contains %d items.", index, len(array)))
}
current = nil
}
}
}
if len(selSegs) > 1 {
current = access(current, selSegs[1], value, isSet, panics)
}
}
return current
}
// intFromInterface converts an interface object holding any integer
// type to an int using a type switch and assertions.
func intFromInterface(selector interface{}) int {
var value int
switch selector.(type) {
case int:
value = selector.(int)
case int8:
value = int(selector.(int8))
case int16:
value = int(selector.(int16))
case int32:
value = int(selector.(int32))
case int64:
value = int(selector.(int64))
case uint:
value = int(selector.(uint))
case uint8:
value = int(selector.(uint8))
case uint16:
value = int(selector.(uint16))
case uint32:
value = int(selector.(uint32))
case uint64:
value = int(selector.(uint64))
default:
panic("objx: array access argument is not an integer type (this should never happen)")
}
return value
}
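Get and Set above both funnel into access, which walks dot- and bracket-delimited selectors one segment at a time. A short usage sketch (the import path github.com/stretchr/objx and the sample data are assumptions for illustration):

package main

import (
	"fmt"

	"github.com/stretchr/objx"
)

func main() {
	m := objx.New(map[string]interface{}{
		"name": "Mat",
		"books": []interface{}{
			map[string]interface{}{"title": "Go"},
		},
	})

	// Dot and array notation both route through access().
	fmt.Println(m.Get("books[0].title").Str()) // "Go"

	// Set walks the same selector and writes the leaf value.
	m.Set("books[0].title", "Objx")
	fmt.Println(m.Get("books[0].title").Str()) // "Objx"
}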


@@ -1,145 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestAccessorsAccessGetSingleField(t *testing.T) {
current := map[string]interface{}{"name": "Tyler"}
assert.Equal(t, "Tyler", access(current, "name", nil, false, true))
}
func TestAccessorsAccessGetDeep(t *testing.T) {
current := map[string]interface{}{"name": map[string]interface{}{"first": "Tyler", "last": "Bunnell"}}
assert.Equal(t, "Tyler", access(current, "name.first", nil, false, true))
assert.Equal(t, "Bunnell", access(current, "name.last", nil, false, true))
}
func TestAccessorsAccessGetDeepDeep(t *testing.T) {
current := map[string]interface{}{"one": map[string]interface{}{"two": map[string]interface{}{"three": map[string]interface{}{"four": 4}}}}
assert.Equal(t, 4, access(current, "one.two.three.four", nil, false, true))
}
func TestAccessorsAccessGetInsideArray(t *testing.T) {
current := map[string]interface{}{"names": []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}}
assert.Equal(t, "Tyler", access(current, "names[0].first", nil, false, true))
assert.Equal(t, "Bunnell", access(current, "names[0].last", nil, false, true))
assert.Equal(t, "Capitol", access(current, "names[1].first", nil, false, true))
assert.Equal(t, "Bollocks", access(current, "names[1].last", nil, false, true))
assert.Panics(t, func() {
access(current, "names[2]", nil, false, true)
})
assert.Nil(t, access(current, "names[2]", nil, false, false))
}
func TestAccessorsAccessGetFromArrayWithInt(t *testing.T) {
current := []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}
one := access(current, 0, nil, false, false)
two := access(current, 1, nil, false, false)
three := access(current, 2, nil, false, false)
assert.Equal(t, "Tyler", one.(map[string]interface{})["first"])
assert.Equal(t, "Capitol", two.(map[string]interface{})["first"])
assert.Nil(t, three)
}
func TestAccessorsGet(t *testing.T) {
current := New(map[string]interface{}{"name": "Tyler"})
assert.Equal(t, "Tyler", current.Get("name").data)
}
func TestAccessorsAccessSetSingleField(t *testing.T) {
current := map[string]interface{}{"name": "Tyler"}
access(current, "name", "Mat", true, false)
assert.Equal(t, current["name"], "Mat")
access(current, "age", 29, true, true)
assert.Equal(t, current["age"], 29)
}
func TestAccessorsAccessSetSingleFieldNotExisting(t *testing.T) {
current := map[string]interface{}{}
access(current, "name", "Mat", true, false)
assert.Equal(t, current["name"], "Mat")
}
func TestAccessorsAccessSetDeep(t *testing.T) {
current := map[string]interface{}{"name": map[string]interface{}{"first": "Tyler", "last": "Bunnell"}}
access(current, "name.first", "Mat", true, true)
access(current, "name.last", "Ryer", true, true)
assert.Equal(t, "Mat", access(current, "name.first", nil, false, true))
assert.Equal(t, "Ryer", access(current, "name.last", nil, false, true))
}
func TestAccessorsAccessSetDeepDeep(t *testing.T) {
current := map[string]interface{}{"one": map[string]interface{}{"two": map[string]interface{}{"three": map[string]interface{}{"four": 4}}}}
access(current, "one.two.three.four", 5, true, true)
assert.Equal(t, 5, access(current, "one.two.three.four", nil, false, true))
}
func TestAccessorsAccessSetArray(t *testing.T) {
current := map[string]interface{}{"names": []interface{}{"Tyler"}}
access(current, "names[0]", "Mat", true, true)
assert.Equal(t, "Mat", access(current, "names[0]", nil, false, true))
}
func TestAccessorsAccessSetInsideArray(t *testing.T) {
current := map[string]interface{}{"names": []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}}
access(current, "names[0].first", "Mat", true, true)
access(current, "names[0].last", "Ryer", true, true)
access(current, "names[1].first", "Captain", true, true)
access(current, "names[1].last", "Underpants", true, true)
assert.Equal(t, "Mat", access(current, "names[0].first", nil, false, true))
assert.Equal(t, "Ryer", access(current, "names[0].last", nil, false, true))
assert.Equal(t, "Captain", access(current, "names[1].first", nil, false, true))
assert.Equal(t, "Underpants", access(current, "names[1].last", nil, false, true))
}
func TestAccessorsAccessSetFromArrayWithInt(t *testing.T) {
current := []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}
one := access(current, 0, nil, false, false)
two := access(current, 1, nil, false, false)
three := access(current, 2, nil, false, false)
assert.Equal(t, "Tyler", one.(map[string]interface{})["first"])
assert.Equal(t, "Capitol", two.(map[string]interface{})["first"])
assert.Nil(t, three)
}
func TestAccessorsSet(t *testing.T) {
current := New(map[string]interface{}{"name": "Tyler"})
current.Set("name", "Mat")
assert.Equal(t, "Mat", current.Get("name").data)
}


@@ -1,14 +0,0 @@
case []{1}:
a := object.([]{1})
if isSet {
a[index] = value.({1})
} else {
if index >= len(a) {
if panics {
panic(fmt.Sprintf("objx: Index %d is out of range because the []{1} only contains %d items.", index, len(a)))
}
return nil
} else {
return a[index]
}
}
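This fragment is a code-generation template rather than compilable Go: {1} stands in for a concrete element type. Substituting {1} = string, for example, would mechanically produce roughly:

case []string:
	a := object.([]string)
	if isSet {
		a[index] = value.(string)
	} else {
		if index >= len(a) {
			if panics {
				panic(fmt.Sprintf("objx: Index %d is out of range because the []string only contains %d items.", index, len(a)))
			}
			return nil
		} else {
			return a[index]
		}
	}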


@@ -1,86 +0,0 @@
<html>
<head>
<title>
Codegen
</title>
<style>
body {
width: 800px;
margin: auto;
}
textarea {
width: 100%;
min-height: 100px;
font-family: Courier;
}
</style>
</head>
<body>
<h2>
Template
</h2>
<p>
Use <code>{x}</code> as a placeholder for each argument.
</p>
<textarea id="template"></textarea>
<h2>
Arguments (comma separated)
</h2>
<p>
One block per line
</p>
<textarea id="args"></textarea>
<h2>
Output
</h2>
<input id="go" type="button" value="Generate code" />
<textarea id="output"></textarea>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>
$(function(){
$("#go").click(function(){
var output = ""
var template = $("#template").val()
var args = $("#args").val()
// collect the args
var argLines = args.split("\n")
for (var line in argLines) {
var argLine = argLines[line];
var thisTemp = template
// get individual args
var args = argLine.split(",")
for (var argI in args) {
var argText = args[argI];
var argPlaceholder = "{" + argI + "}";
while (thisTemp.indexOf(argPlaceholder) > -1) {
thisTemp = thisTemp.replace(argPlaceholder, argText);
}
}
output += thisTemp
}
$("#output").val(output);
});
});
</script>
</body>
</html>


@@ -1,286 +0,0 @@
/*
{4} ({1} and []{1})
--------------------------------------------------
*/
// {4} gets the value as a {1}, returns the optionalDefault
// value or a system default object if the value is the wrong type.
func (v *Value) {4}(optionalDefault ...{1}) {1} {
if s, ok := v.data.({1}); ok {
return s
}
if len(optionalDefault) == 1 {
return optionalDefault[0]
}
return {3}
}
// Must{4} gets the value as a {1}.
//
// Panics if the object is not a {1}.
func (v *Value) Must{4}() {1} {
return v.data.({1})
}
// {4}Slice gets the value as a []{1}, returns the optionalDefault
// value or nil if the value is not a []{1}.
func (v *Value) {4}Slice(optionalDefault ...[]{1}) []{1} {
if s, ok := v.data.([]{1}); ok {
return s
}
if len(optionalDefault) == 1 {
return optionalDefault[0]
}
return nil
}
// Must{4}Slice gets the value as a []{1}.
//
// Panics if the object is not a []{1}.
func (v *Value) Must{4}Slice() []{1} {
return v.data.([]{1})
}
// Is{4} gets whether the object contained is a {1} or not.
func (v *Value) Is{4}() bool {
_, ok := v.data.({1})
return ok
}
// Is{4}Slice gets whether the object contained is a []{1} or not.
func (v *Value) Is{4}Slice() bool {
_, ok := v.data.([]{1})
return ok
}
// Each{4} calls the specified callback for each object
// in the []{1}.
//
// Panics if the object is the wrong type.
func (v *Value) Each{4}(callback func(int, {1}) bool) *Value {
for index, val := range v.Must{4}Slice() {
carryon := callback(index, val)
if carryon == false {
break
}
}
return v
}
// Where{4} uses the specified decider function to select items
// from the []{1}. The object contained in the result will contain
// only the selected items.
func (v *Value) Where{4}(decider func(int, {1}) bool) *Value {
var selected []{1}
v.Each{4}(func(index int, val {1}) bool {
shouldSelect := decider(index, val)
if shouldSelect == false {
selected = append(selected, val)
}
return true
})
return &Value{data:selected}
}
// Group{4} uses the specified grouper function to group the items
// keyed by the return of the grouper. The object contained in the
// result will contain a map[string][]{1}.
func (v *Value) Group{4}(grouper func(int, {1}) string) *Value {
groups := make(map[string][]{1})
v.Each{4}(func(index int, val {1}) bool {
group := grouper(index, val)
if _, ok := groups[group]; !ok {
groups[group] = make([]{1}, 0)
}
groups[group] = append(groups[group], val)
return true
})
return &Value{data:groups}
}
// Replace{4} uses the specified function to replace each {1}s
// by iterating each item. The data in the returned result will be a
// []{1} containing the replaced items.
func (v *Value) Replace{4}(replacer func(int, {1}) {1}) *Value {
arr := v.Must{4}Slice()
replaced := make([]{1}, len(arr))
v.Each{4}(func(index int, val {1}) bool {
replaced[index] = replacer(index, val)
return true
})
return &Value{data:replaced}
}
// Collect{4} uses the specified collector function to collect a value
// for each of the {1}s in the slice. The data returned will be a
// []interface{}.
func (v *Value) Collect{4}(collector func(int, {1}) interface{}) *Value {
arr := v.Must{4}Slice()
collected := make([]interface{}, len(arr))
v.Each{4}(func(index int, val {1}) bool {
collected[index] = collector(index, val)
return true
})
return &Value{data:collected}
}
// ************************************************************
// TESTS
// ************************************************************
func Test{4}(t *testing.T) {
val := {1}( {2} )
m := map[string]interface{}{"value": val, "nothing": nil}
assert.Equal(t, val, New(m).Get("value").{4}())
assert.Equal(t, val, New(m).Get("value").Must{4}())
assert.Equal(t, {1}({3}), New(m).Get("nothing").{4}())
assert.Equal(t, val, New(m).Get("nothing").{4}({2}))
assert.Panics(t, func() {
New(m).Get("age").Must{4}()
})
}
func Test{4}Slice(t *testing.T) {
val := {1}( {2} )
m := map[string]interface{}{"value": []{1}{ val }, "nothing": nil}
assert.Equal(t, val, New(m).Get("value").{4}Slice()[0])
assert.Equal(t, val, New(m).Get("value").Must{4}Slice()[0])
assert.Equal(t, []{1}(nil), New(m).Get("nothing").{4}Slice())
assert.Equal(t, val, New(m).Get("nothing").{4}Slice( []{1}{ {1}({2}) } )[0])
assert.Panics(t, func() {
New(m).Get("nothing").Must{4}Slice()
})
}
func TestIs{4}(t *testing.T) {
var v *Value
v = &Value{data: {1}({2})}
assert.True(t, v.Is{4}())
v = &Value{data: []{1}{ {1}({2}) }}
assert.True(t, v.Is{4}Slice())
}
func TestEach{4}(t *testing.T) {
v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
count := 0
replacedVals := make([]{1}, 0)
assert.Equal(t, v, v.Each{4}(func(i int, val {1}) bool {
count++
replacedVals = append(replacedVals, val)
// abort early
if i == 2 {
return false
}
return true
}))
assert.Equal(t, count, 3)
assert.Equal(t, replacedVals[0], v.Must{4}Slice()[0])
assert.Equal(t, replacedVals[1], v.Must{4}Slice()[1])
assert.Equal(t, replacedVals[2], v.Must{4}Slice()[2])
}
func TestWhere{4}(t *testing.T) {
v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
selected := v.Where{4}(func(i int, val {1}) bool {
return i%2==0
}).Must{4}Slice()
assert.Equal(t, 3, len(selected))
}
func TestGroup{4}(t *testing.T) {
v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
grouped := v.Group{4}(func(i int, val {1}) string {
return fmt.Sprintf("%v", i%2==0)
}).data.(map[string][]{1})
assert.Equal(t, 2, len(grouped))
assert.Equal(t, 3, len(grouped["true"]))
assert.Equal(t, 3, len(grouped["false"]))
}
func TestReplace{4}(t *testing.T) {
v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
rawArr := v.Must{4}Slice()
replaced := v.Replace{4}(func(index int, val {1}) {1} {
if index < len(rawArr)-1 {
return rawArr[index+1]
}
return rawArr[0]
})
replacedArr := replaced.Must{4}Slice()
if assert.Equal(t, 6, len(replacedArr)) {
assert.Equal(t, replacedArr[0], rawArr[1])
assert.Equal(t, replacedArr[1], rawArr[2])
assert.Equal(t, replacedArr[2], rawArr[3])
assert.Equal(t, replacedArr[3], rawArr[4])
assert.Equal(t, replacedArr[4], rawArr[5])
assert.Equal(t, replacedArr[5], rawArr[0])
}
}
func TestCollect{4}(t *testing.T) {
v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
collected := v.Collect{4}(func(index int, val {1}) interface{} {
return index
})
collectedArr := collected.MustInterSlice()
if assert.Equal(t, 6, len(collectedArr)) {
assert.Equal(t, collectedArr[0], 0)
assert.Equal(t, collectedArr[1], 1)
assert.Equal(t, collectedArr[2], 2)
assert.Equal(t, collectedArr[3], 3)
assert.Equal(t, collectedArr[4], 4)
assert.Equal(t, collectedArr[5], 5)
}
}


@@ -1,20 +0,0 @@
Interface,interface{},"something",nil,Inter
Map,map[string]interface{},map[string]interface{}{"name":"Tyler"},nil,MSI
ObjxMap,(Map),New(1),New(nil),ObjxMap
Bool,bool,true,false,Bool
String,string,"hello","",Str
Int,int,1,0,Int
Int8,int8,1,0,Int8
Int16,int16,1,0,Int16
Int32,int32,1,0,Int32
Int64,int64,1,0,Int64
Uint,uint,1,0,Uint
Uint8,uint8,1,0,Uint8
Uint16,uint16,1,0,Uint16
Uint32,uint32,1,0,Uint32
Uint64,uint64,1,0,Uint64
Uintptr,uintptr,1,0,Uintptr
Float32,float32,1,0,Float32
Float64,float64,1,0,Float64
Complex64,complex64,1,0,Complex64
Complex128,complex128,1,0,Complex128
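Each line of this file supplies the comma-separated arguments spliced into the {n} placeholders of the template above: {1} is the Go type, {2} a sample value, {3} the zero/default value, and {4} the accessor name. The String row ({1}=string, {2}="hello", {3}="", {4}=Str), for instance, expands the first accessor into roughly:

// Str gets the value as a string, returns the optionalDefault
// value or a system default object if the value is the wrong type.
func (v *Value) Str(optionalDefault ...string) string {
	if s, ok := v.data.(string); ok {
		return s
	}
	if len(optionalDefault) == 1 {
		return optionalDefault[0]
	}
	return ""
}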


@@ -1,13 +0,0 @@
package objx
const (
// PathSeparator is the character used to separate the elements
// of the keypath.
//
// For example, `location.address.city`
PathSeparator string = "."
// SignatureSeparator is the character that is used to
// separate the Base64 string from the security signature.
SignatureSeparator = "_"
)


@@ -1,117 +0,0 @@
package objx
import (
"bytes"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net/url"
)
// JSON converts the contained object to a JSON string
// representation
func (m Map) JSON() (string, error) {
result, err := json.Marshal(m)
if err != nil {
err = errors.New("objx: JSON encode failed with: " + err.Error())
}
return string(result), err
}
// MustJSON converts the contained object to a JSON string
// representation and panics if there is an error
func (m Map) MustJSON() string {
result, err := m.JSON()
if err != nil {
panic(err.Error())
}
return result
}
// Base64 converts the contained object to a Base64 string
// representation of the JSON string representation
func (m Map) Base64() (string, error) {
var buf bytes.Buffer
jsonData, err := m.JSON()
if err != nil {
return "", err
}
encoder := base64.NewEncoder(base64.StdEncoding, &buf)
encoder.Write([]byte(jsonData))
encoder.Close()
return buf.String(), nil
}
// MustBase64 converts the contained object to a Base64 string
// representation of the JSON string representation and panics
// if there is an error
func (m Map) MustBase64() string {
result, err := m.Base64()
if err != nil {
panic(err.Error())
}
return result
}
// SignedBase64 converts the contained object to a Base64 string
// representation of the JSON string representation and signs it
// using the provided key.
func (m Map) SignedBase64(key string) (string, error) {
base64, err := m.Base64()
if err != nil {
return "", err
}
sig := HashWithKey(base64, key)
return base64 + SignatureSeparator + sig, nil
}
// MustSignedBase64 converts the contained object to a Base64 string
// representation of the JSON string representation and signs it
// using the provided key and panics if there is an error
func (m Map) MustSignedBase64(key string) string {
result, err := m.SignedBase64(key)
if err != nil {
panic(err.Error())
}
return result
}
/*
URL Query
------------------------------------------------
*/
// URLValues creates a url.Values object from an Obj. This
// function requires that the wrapped object be a map[string]interface{}
func (m Map) URLValues() url.Values {
vals := make(url.Values)
for k, v := range m {
//TODO: can this be done without sprintf?
vals.Set(k, fmt.Sprintf("%v", v))
}
return vals
}
// URLQuery gets an encoded URL query representing the given
// Obj. This function requires that the wrapped object be a
// map[string]interface{}
func (m Map) URLQuery() (string, error) {
return m.URLValues().Encode(), nil
}


@@ -1,94 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestConversionJSON(t *testing.T) {
jsonString := `{"name":"Mat"}`
o := MustFromJSON(jsonString)
result, err := o.JSON()
if assert.NoError(t, err) {
assert.Equal(t, jsonString, result)
}
assert.Equal(t, jsonString, o.MustJSON())
}
func TestConversionJSONWithError(t *testing.T) {
o := MSI()
o["test"] = func() {}
assert.Panics(t, func() {
o.MustJSON()
})
_, err := o.JSON()
assert.Error(t, err)
}
func TestConversionBase64(t *testing.T) {
o := New(map[string]interface{}{"name": "Mat"})
result, err := o.Base64()
if assert.NoError(t, err) {
assert.Equal(t, "eyJuYW1lIjoiTWF0In0=", result)
}
assert.Equal(t, "eyJuYW1lIjoiTWF0In0=", o.MustBase64())
}
func TestConversionBase64WithError(t *testing.T) {
o := MSI()
o["test"] = func() {}
assert.Panics(t, func() {
o.MustBase64()
})
_, err := o.Base64()
assert.Error(t, err)
}
func TestConversionSignedBase64(t *testing.T) {
o := New(map[string]interface{}{"name": "Mat"})
result, err := o.SignedBase64("key")
if assert.NoError(t, err) {
assert.Equal(t, "eyJuYW1lIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6", result)
}
assert.Equal(t, "eyJuYW1lIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6", o.MustSignedBase64("key"))
}
func TestConversionSignedBase64WithError(t *testing.T) {
o := MSI()
o["test"] = func() {}
assert.Panics(t, func() {
o.MustSignedBase64("key")
})
_, err := o.SignedBase64("key")
assert.Error(t, err)
}


@@ -1,72 +0,0 @@
// objx - Go package for dealing with maps, slices, JSON and other data.
//
// Overview
//
// Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes
// a powerful `Get` method (among others) that allows you to easily and quickly get
// access to data within the map, without having to worry too much about type assertions,
// missing data, default values etc.
//
// Pattern
//
// Objx uses a predictable pattern to make accessing data from within `map[string]interface{}`'s
// easy.
//
// Call one of the `objx.` functions to create your `objx.Map` to get going:
//
// m, err := objx.FromJSON(json)
//
// NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong,
// the rest will be optimistic and try to figure things out without panicking.
//
// Use `Get` to access the value you're interested in. You can use dot and array
// notation too:
//
// m.Get("places[0].latlng")
//
// Once you have sought the `Value` you're interested in, you can use the `Is*` methods
// to determine its type.
//
// if m.Get("code").IsStr() { /* ... */ }
//
// Or you can just assume the type, and use one of the strong type methods to
// extract the real value:
//
// m.Get("code").Int()
//
// If there's no value there (or if it's the wrong type) then a default value
// will be returned, or you can be explicit about the default value.
//
// Get("code").Int(-1)
//
// If you're dealing with a slice of data as a value, Objx provides many useful
// methods for iterating, manipulating and selecting that data. You can find out more
// by exploring the index below.
//
// Reading data
//
// A simple example of how to use Objx:
//
// // use MustFromJSON to make an objx.Map from some JSON
// m := objx.MustFromJSON(`{"name": "Mat", "age": 30}`)
//
// // get the details
// name := m.Get("name").Str()
// age := m.Get("age").Int()
//
// // get their nickname (or use their name if they
// // don't have one)
// nickname := m.Get("nickname").Str(name)
//
// Ranging
//
// Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For
// example, to `range` the data, do what you would expect:
//
// m := objx.MustFromJSON(json)
// for key, value := range m {
//
// /* ... do your magic ... */
//
// }
package objx


@@ -1,98 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
var fixtures = []struct {
// name is the name of the fixture (used for reporting
// failures)
name string
// data is the JSON data to be worked on
data string
// get is the argument(s) to pass to Get
get interface{}
// output is the expected output
output interface{}
}{
{
name: "Simple get",
data: `{"name": "Mat"}`,
get: "name",
output: "Mat",
},
{
name: "Get with dot notation",
data: `{"address": {"city": "Boulder"}}`,
get: "address.city",
output: "Boulder",
},
{
name: "Deep get with dot notation",
data: `{"one": {"two": {"three": {"four": "hello"}}}}`,
get: "one.two.three.four",
output: "hello",
},
{
name: "Get missing with dot notation",
data: `{"one": {"two": {"three": {"four": "hello"}}}}`,
get: "one.ten",
output: nil,
},
{
name: "Get with array notation",
data: `{"tags": ["one", "two", "three"]}`,
get: "tags[1]",
output: "two",
},
{
name: "Get with array and dot notation",
data: `{"types": { "tags": ["one", "two", "three"]}}`,
get: "types.tags[1]",
output: "two",
},
{
name: "Get with array and dot notation - field after array",
data: `{"tags": [{"name":"one"}, {"name":"two"}, {"name":"three"}]}`,
get: "tags[1].name",
output: "two",
},
{
name: "Complex get with array and dot notation",
data: `{"tags": [{"list": [{"one":"pizza"}]}]}`,
get: "tags[0].list[0].one",
output: "pizza",
},
{
name: "Get field from within string should be nil",
data: `{"name":"Tyler"}`,
get: "name.something",
output: nil,
},
{
name: "Get field from within string (using array accessor) should be nil",
data: `{"numbers":["one", "two", "three"]}`,
get: "numbers[0].nope",
output: nil,
},
}
func TestFixtures(t *testing.T) {
for _, fixture := range fixtures {
m := MustFromJSON(fixture.data)
// get the value
t.Logf("Running get fixture: \"%s\" (%v)", fixture.name, fixture)
value := m.Get(fixture.get.(string))
// make sure it matches
assert.Equal(t, fixture.output, value.data,
"Get fixture \"%s\" failed: %v", fixture.name, fixture,
)
}
}


@@ -1,222 +0,0 @@
package objx
import (
"encoding/base64"
"encoding/json"
"errors"
"io/ioutil"
"net/url"
"strings"
)
// MSIConvertable is an interface that defines methods for converting your
// custom types to a map[string]interface{} representation.
type MSIConvertable interface {
// MSI gets a map[string]interface{} (msi) representing the
// object.
MSI() map[string]interface{}
}
// Map provides extended functionality for working with
// untyped data, in particular map[string]interface (msi).
type Map map[string]interface{}
// Value returns the internal value instance
func (m Map) Value() *Value {
return &Value{data: m}
}
// Nil represents a nil Map.
var Nil Map = New(nil)
// New creates a new Map containing the map[string]interface{} in the data argument.
// If the data argument is not a map[string]interface, New attempts to call the
// MSI() method on the MSIConvertable interface to create one.
func New(data interface{}) Map {
if _, ok := data.(map[string]interface{}); !ok {
if converter, ok := data.(MSIConvertable); ok {
data = converter.MSI()
} else {
return nil
}
}
return Map(data.(map[string]interface{}))
}
// MSI creates a map[string]interface{} and puts it inside a new Map.
//
// The arguments follow a key, value pattern.
//
// Panics
//
// Panics if any key argument is non-string or if there is an odd number of arguments.
//
// Example
//
// To easily create Maps:
//
// m := objx.MSI("name", "Mat", "age", 29, "subobj", objx.MSI("active", true))
//
// // creates an Map equivalent to
// m := objx.New(map[string]interface{}{"name": "Mat", "age": 29, "subobj": map[string]interface{}{"active": true}})
func MSI(keyAndValuePairs ...interface{}) Map {
newMap := make(map[string]interface{})
keyAndValuePairsLen := len(keyAndValuePairs)
if keyAndValuePairsLen%2 != 0 {
panic("objx: MSI must have an even number of arguments following the 'key, value' pattern.")
}
for i := 0; i < keyAndValuePairsLen; i = i + 2 {
key := keyAndValuePairs[i]
value := keyAndValuePairs[i+1]
// make sure the key is a string
keyString, keyStringOK := key.(string)
if !keyStringOK {
panic("objx: MSI must follow 'string, interface{}' pattern. " + keyString + " is not a valid key.")
}
newMap[keyString] = value
}
return New(newMap)
}
// ****** Conversion Constructors
// MustFromJSON creates a new Map containing the data specified in the
// jsonString.
//
// Panics if the JSON is invalid.
func MustFromJSON(jsonString string) Map {
o, err := FromJSON(jsonString)
if err != nil {
panic("objx: MustFromJSON failed with error: " + err.Error())
}
return o
}
// FromJSON creates a new Map containing the data specified in the
// jsonString.
//
// Returns an error if the JSON is invalid.
func FromJSON(jsonString string) (Map, error) {
var data interface{}
err := json.Unmarshal([]byte(jsonString), &data)
if err != nil {
return Nil, err
}
return New(data), nil
}
// FromBase64 creates a new Obj containing the data specified
// in the Base64 string.
//
// The string is an encoded JSON string returned by Base64
func FromBase64(base64String string) (Map, error) {
decoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(base64String))
decoded, err := ioutil.ReadAll(decoder)
if err != nil {
return nil, err
}
return FromJSON(string(decoded))
}
// MustFromBase64 creates a new Obj containing the data specified
// in the Base64 string and panics if there is an error.
//
// The string is an encoded JSON string returned by Base64
func MustFromBase64(base64String string) Map {
result, err := FromBase64(base64String)
if err != nil {
panic("objx: MustFromBase64 failed with error: " + err.Error())
}
return result
}
// FromSignedBase64 creates a new Obj containing the data specified
// in the Base64 string.
//
// The string is an encoded JSON string returned by SignedBase64
func FromSignedBase64(base64String, key string) (Map, error) {
parts := strings.Split(base64String, SignatureSeparator)
if len(parts) != 2 {
return nil, errors.New("objx: Signed base64 string is malformed.")
}
sig := HashWithKey(parts[0], key)
if parts[1] != sig {
return nil, errors.New("objx: Signature for base64 data does not match.")
}
return FromBase64(parts[0])
}
// MustFromSignedBase64 creates a new Obj containing the data specified
// in the Base64 string and panics if there is an error.
//
// The string is an encoded JSON string returned by Base64
func MustFromSignedBase64(base64String, key string) Map {
result, err := FromSignedBase64(base64String, key)
if err != nil {
panic("objx: MustFromSignedBase64 failed with error: " + err.Error())
}
return result
}
// FromURLQuery generates a new Obj by parsing the specified
// query.
//
// For queries with multiple values, the first value is selected.
func FromURLQuery(query string) (Map, error) {
vals, err := url.ParseQuery(query)
if err != nil {
return nil, err
}
m := make(map[string]interface{})
for k, vals := range vals {
m[k] = vals[0]
}
return New(m), nil
}
// MustFromURLQuery generates a new Obj by parsing the specified
// query.
//
// For queries with multiple values, the first value is selected.
//
// Panics if it encounters an error
func MustFromURLQuery(query string) Map {
o, err := FromURLQuery(query)
if err != nil {
panic("objx: MustFromURLQuery failed with error: " + err.Error())
}
return o
}


@@ -1,10 +0,0 @@
package objx
var TestMap map[string]interface{} = map[string]interface{}{
"name": "Tyler",
"address": map[string]interface{}{
"city": "Salt Lake City",
"state": "UT",
},
"numbers": []interface{}{"one", "two", "three", "four", "five"},
}


@@ -1,147 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
type Convertable struct {
name string
}
func (c *Convertable) MSI() map[string]interface{} {
return map[string]interface{}{"name": c.name}
}
type Unconvertable struct {
name string
}
func TestMapCreation(t *testing.T) {
o := New(nil)
assert.Nil(t, o)
o = New("Tyler")
assert.Nil(t, o)
unconvertable := &Unconvertable{name: "Tyler"}
o = New(unconvertable)
assert.Nil(t, o)
convertable := &Convertable{name: "Tyler"}
o = New(convertable)
if assert.NotNil(t, convertable) {
assert.Equal(t, "Tyler", o["name"], "Tyler")
}
o = MSI()
if assert.NotNil(t, o) {
assert.NotNil(t, o)
}
o = MSI("name", "Tyler")
if assert.NotNil(t, o) {
if assert.NotNil(t, o) {
assert.Equal(t, o["name"], "Tyler")
}
}
}
func TestMapMustFromJSONWithError(t *testing.T) {
_, err := FromJSON(`"name":"Mat"}`)
assert.Error(t, err)
}
func TestMapFromJSON(t *testing.T) {
o := MustFromJSON(`{"name":"Mat"}`)
if assert.NotNil(t, o) {
if assert.NotNil(t, o) {
assert.Equal(t, "Mat", o["name"])
}
}
}
func TestMapFromJSONWithError(t *testing.T) {
var m Map
assert.Panics(t, func() {
m = MustFromJSON(`"name":"Mat"}`)
})
assert.Nil(t, m)
}
func TestMapFromBase64String(t *testing.T) {
base64String := "eyJuYW1lIjoiTWF0In0="
o, err := FromBase64(base64String)
if assert.NoError(t, err) {
assert.Equal(t, o.Get("name").Str(), "Mat")
}
assert.Equal(t, MustFromBase64(base64String).Get("name").Str(), "Mat")
}
func TestMapFromBase64StringWithError(t *testing.T) {
base64String := "eyJuYW1lIjoiTWFasd0In0="
_, err := FromBase64(base64String)
assert.Error(t, err)
assert.Panics(t, func() {
MustFromBase64(base64String)
})
}
func TestMapFromSignedBase64String(t *testing.T) {
base64String := "eyJuYW1lIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6"
o, err := FromSignedBase64(base64String, "key")
if assert.NoError(t, err) {
assert.Equal(t, o.Get("name").Str(), "Mat")
}
assert.Equal(t, MustFromSignedBase64(base64String, "key").Get("name").Str(), "Mat")
}
func TestMapFromSignedBase64StringWithError(t *testing.T) {
base64String := "eyJuYW1lasdIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6"
_, err := FromSignedBase64(base64String, "key")
assert.Error(t, err)
assert.Panics(t, func() {
MustFromSignedBase64(base64String, "key")
})
}
func TestMapFromURLQuery(t *testing.T) {
m, err := FromURLQuery("name=tyler&state=UT")
if assert.NoError(t, err) && assert.NotNil(t, m) {
assert.Equal(t, "tyler", m.Get("name").Str())
assert.Equal(t, "UT", m.Get("state").Str())
}
}


@@ -1,81 +0,0 @@
package objx
// Exclude returns a new Map with the keys in the specified []string
// excluded.
func (d Map) Exclude(exclude []string) Map {
excluded := make(Map)
for k, v := range d {
var shouldInclude bool = true
for _, toExclude := range exclude {
if k == toExclude {
shouldInclude = false
break
}
}
if shouldInclude {
excluded[k] = v
}
}
return excluded
}
// Copy creates a shallow copy of the Obj.
func (m Map) Copy() Map {
copied := make(map[string]interface{})
for k, v := range m {
copied[k] = v
}
return New(copied)
}
// Merge blends the specified map with a copy of this map and returns the result.
//
// Keys that appear in both will be selected from the specified map.
// This method requires that the wrapped object be a map[string]interface{}
func (m Map) Merge(merge Map) Map {
return m.Copy().MergeHere(merge)
}
// MergeHere blends the specified map with this map and returns the current map.
//
// Keys that appear in both will be selected from the specified map. The original map
// will be modified. This method requires that
// the wrapped object be a map[string]interface{}
func (m Map) MergeHere(merge Map) Map {
for k, v := range merge {
m[k] = v
}
return m
}
// Transform builds a new Obj giving the transformer a chance
// to change the keys and values as it goes. This method requires that
// the wrapped object be a map[string]interface{}
func (m Map) Transform(transformer func(key string, value interface{}) (string, interface{})) Map {
newMap := make(map[string]interface{})
for k, v := range m {
modifiedKey, modifiedVal := transformer(k, v)
newMap[modifiedKey] = modifiedVal
}
return New(newMap)
}
// TransformKeys builds a new map using the specified key mapping.
//
// Unspecified keys will be unaltered.
// This method requires that the wrapped object be a map[string]interface{}
func (m Map) TransformKeys(mapping map[string]string) Map {
return m.Transform(func(key string, value interface{}) (string, interface{}) {
if newKey, ok := mapping[key]; ok {
return newKey, value
}
return key, value
})
}
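The tests that follow exercise Exclude, Copy, Merge and MergeHere; Transform and TransformKeys are easiest to see with a short sketch (the import path github.com/stretchr/objx and the keys below are assumptions for illustration):

package main

import (
	"fmt"
	"strings"

	"github.com/stretchr/objx"
)

func main() {
	m := objx.New(map[string]interface{}{"first_name": "Mat"})

	// TransformKeys renames only the keys present in the mapping.
	renamed := m.TransformKeys(map[string]string{"first_name": "firstName"})
	fmt.Println(renamed.Get("firstName").Str()) // "Mat"

	// Transform may rewrite keys and values together.
	upper := m.Transform(func(key string, value interface{}) (string, interface{}) {
		return strings.ToUpper(key), value
	})
	fmt.Println(upper.Get("FIRST_NAME").Str()) // "Mat"
}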


@@ -1,77 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestExclude(t *testing.T) {
d := make(Map)
d["name"] = "Mat"
d["age"] = 29
d["secret"] = "ABC"
excluded := d.Exclude([]string{"secret"})
assert.Equal(t, d["name"], excluded["name"])
assert.Equal(t, d["age"], excluded["age"])
assert.False(t, excluded.Has("secret"), "secret should be excluded")
}
func TestCopy(t *testing.T) {
d1 := make(map[string]interface{})
d1["name"] = "Tyler"
d1["location"] = "UT"
d1Obj := New(d1)
d2Obj := d1Obj.Copy()
d2Obj["name"] = "Mat"
assert.Equal(t, d1Obj.Get("name").Str(), "Tyler")
assert.Equal(t, d2Obj.Get("name").Str(), "Mat")
}
func TestMerge(t *testing.T) {
d := make(map[string]interface{})
d["name"] = "Mat"
d1 := make(map[string]interface{})
d1["name"] = "Tyler"
d1["location"] = "UT"
dObj := New(d)
d1Obj := New(d1)
merged := dObj.Merge(d1Obj)
assert.Equal(t, merged.Get("name").Str(), d1Obj.Get("name").Str())
assert.Equal(t, merged.Get("location").Str(), d1Obj.Get("location").Str())
assert.Empty(t, dObj.Get("location").Str())
}
func TestMergeHere(t *testing.T) {
d := make(map[string]interface{})
d["name"] = "Mat"
d1 := make(map[string]interface{})
d1["name"] = "Tyler"
d1["location"] = "UT"
dObj := New(d)
d1Obj := New(d1)
merged := dObj.MergeHere(d1Obj)
assert.Equal(t, dObj, merged, "With MergeHere, it should return the first modified map")
assert.Equal(t, merged.Get("name").Str(), d1Obj.Get("name").Str())
assert.Equal(t, merged.Get("location").Str(), d1Obj.Get("location").Str())
assert.Equal(t, merged.Get("location").Str(), dObj.Get("location").Str())
}


@@ -1,14 +0,0 @@
package objx
import (
"crypto/sha1"
"encoding/hex"
)
// HashWithKey hashes the specified string using the security
// key.
func HashWithKey(data, key string) string {
hash := sha1.New()
hash.Write([]byte(data + ":" + key))
return hex.EncodeToString(hash.Sum(nil))
}
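HashWithKey is what backs the signed-base64 fixtures in the tests above. The sketch below reassembles such a signed string; the "<base64>_<signature>" layout is inferred from those fixtures rather than shown in this hunk:

package main

import (
    "fmt"

    "github.com/stretchr/objx" // assumed canonical import path
)

func main() {
    // HashWithKey is SHA-1 over "data:key", hex-encoded (see the removed test fixture).
    fmt.Println(objx.HashWithKey("abc", "def")) // 0ce84d8d01f2c7b6e0882b784429c54d280ea2d9

    // Inferred from the test fixtures above: a signed payload is "<base64>_<HashWithKey(base64, key)>".
    payload := "eyJuYW1lIjoiTWF0In0=" // {"name":"Mat"}
    signed := payload + "_" + objx.HashWithKey(payload, "key")
    m, err := objx.FromSignedBase64(signed, "key")
    if err == nil {
        fmt.Println(m.Get("name").Str()) // Mat
    }
}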


@@ -1,12 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestHashWithKey(t *testing.T) {
assert.Equal(t, "0ce84d8d01f2c7b6e0882b784429c54d280ea2d9", HashWithKey("abc", "def"))
}


@@ -1,41 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestSimpleExample(t *testing.T) {
// build a map from a JSON object
o := MustFromJSON(`{"name":"Mat","foods":["indian","chinese"], "location":{"county":"hobbiton","city":"the shire"}}`)
// Map can be used as a straight map[string]interface{}
assert.Equal(t, o["name"], "Mat")
// Get an Value object
v := o.Get("name")
assert.Equal(t, v, &Value{data: "Mat"})
// Test the contained value
assert.False(t, v.IsInt())
assert.False(t, v.IsBool())
assert.True(t, v.IsStr())
// Get the contained value
assert.Equal(t, v.Str(), "Mat")
// Get a default value if the contained value is not of the expected type or does not exist
assert.Equal(t, 1, v.Int(1))
// Get a value by using array notation
assert.Equal(t, "indian", o.Get("foods[0]").Data())
// Set a value by using array notation
o.Set("foods[0]", "italian")
assert.Equal(t, "italian", o.Get("foods[0]").Str())
// Get a value by using dot notation
assert.Equal(t, "hobbiton", o.Get("location.county").Str())
}


@@ -1,17 +0,0 @@
package objx
// Has gets whether there is something at the specified selector
// or not.
//
// If m is nil, Has will always return false.
func (m Map) Has(selector string) bool {
if m == nil {
return false
}
return !m.Get(selector).IsNil()
}
// IsNil gets whether the data is nil or not.
func (v *Value) IsNil() bool {
return v == nil || v.data == nil
}
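A short sketch of the nil-safe Has helper above, combined with the dot and array selector syntax exercised elsewhere in the removed tests (import path assumed):

package main

import (
    "fmt"

    "github.com/stretchr/objx" // assumed canonical import path
)

func main() {
    m := objx.MustFromJSON(`{"address":{"state":"UT"},"numbers":[1,2,3]}`)

    fmt.Println(m.Has("address.state")) // true, dot notation walks nested maps
    fmt.Println(m.Has("numbers[5]"))    // false, index out of range
    fmt.Println(m.Has("nope"))          // false

    // A nil Map is safe: Has short-circuits to false instead of panicking.
    var empty objx.Map
    fmt.Println(empty.Has("anything")) // false
}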


@@ -1,24 +0,0 @@
package objx
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestHas(t *testing.T) {
m := New(TestMap)
assert.True(t, m.Has("name"))
assert.True(t, m.Has("address.state"))
assert.True(t, m.Has("numbers[4]"))
assert.False(t, m.Has("address.state.nope"))
assert.False(t, m.Has("address.nope"))
assert.False(t, m.Has("nope"))
assert.False(t, m.Has("numbers[5]"))
m = nil
assert.False(t, m.Has("nothing"))
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,13 +0,0 @@
package objx
// Value provides methods for extracting interface{} data in various
// types.
type Value struct {
// data contains the raw data being managed by this Value
data interface{}
}
// Data returns the raw data contained by this Value
func (v *Value) Data() interface{} {
return v.data
}


@@ -1 +0,0 @@
package objx


@@ -1,805 +0,0 @@
package assert
import (
"bufio"
"bytes"
"fmt"
"reflect"
"regexp"
"runtime"
"strings"
"time"
)
// TestingT is an interface wrapper around *testing.T
type TestingT interface {
Errorf(format string, args ...interface{})
}
// Comparison is a custom function that returns true on success and false on failure
type Comparison func() (success bool)
/*
Helper functions
*/
// ObjectsAreEqual determines if two objects are considered equal.
//
// This function does no assertion of any kind.
func ObjectsAreEqual(expected, actual interface{}) bool {
if expected == nil || actual == nil {
return expected == actual
}
if reflect.DeepEqual(expected, actual) {
return true
}
// Last ditch effort
if fmt.Sprintf("%#v", expected) == fmt.Sprintf("%#v", actual) {
return true
}
return false
}
func ObjectsAreEqualValues(expected, actual interface{}) bool {
if ObjectsAreEqual(expected, actual) {
return true
}
actualType := reflect.TypeOf(actual)
expectedValue := reflect.ValueOf(expected)
if expectedValue.Type().ConvertibleTo(actualType) {
// Attempt comparison after type conversion
if reflect.DeepEqual(actual, expectedValue.Convert(actualType).Interface()) {
return true
}
}
return false
}
/* CallerInfo is necessary because the assert functions use the testing object
internally, causing it to print the file:line of the assert method, rather than where
the problem actually occurred in calling code.*/
// CallerInfo returns a string containing the file and line number of the assert call
// that failed.
func CallerInfo() string {
file := ""
line := 0
ok := false
for i := 0; ; i++ {
_, file, line, ok = runtime.Caller(i)
if !ok {
return ""
}
parts := strings.Split(file, "/")
dir := parts[len(parts)-2]
file = parts[len(parts)-1]
if (dir != "assert" && dir != "mock" && dir != "require") || file == "mock_test.go" {
break
}
}
return fmt.Sprintf("%s:%d", file, line)
}
// getWhitespaceString returns a string that is long enough to overwrite the default
// output from the go testing framework.
func getWhitespaceString() string {
_, file, line, ok := runtime.Caller(1)
if !ok {
return ""
}
parts := strings.Split(file, "/")
file = parts[len(parts)-1]
return strings.Repeat(" ", len(fmt.Sprintf("%s:%d: ", file, line)))
}
func messageFromMsgAndArgs(msgAndArgs ...interface{}) string {
if len(msgAndArgs) == 0 || msgAndArgs == nil {
return ""
}
if len(msgAndArgs) == 1 {
return msgAndArgs[0].(string)
}
if len(msgAndArgs) > 1 {
return fmt.Sprintf(msgAndArgs[0].(string), msgAndArgs[1:]...)
}
return ""
}
// indentMessageLines indents all lines of the message by prepending a number of tabs to each line, in an output format compatible with Go's
// test printing (see inner comment for specifics)
func indentMessageLines(message string, tabs int) string {
outBuf := new(bytes.Buffer)
for i, scanner := 0, bufio.NewScanner(strings.NewReader(message)); scanner.Scan(); i++ {
if i != 0 {
outBuf.WriteRune('\n')
}
for ii := 0; ii < tabs; ii++ {
outBuf.WriteRune('\t')
// Bizarrely, all lines except the first need one fewer tabs prepended, so deliberately advance the counter
// by 1 prematurely.
if ii == 0 && i > 0 {
ii++
}
}
outBuf.WriteString(scanner.Text())
}
return outBuf.String()
}
// Fail reports a failure through the TestingT instance.
func Fail(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool {
message := messageFromMsgAndArgs(msgAndArgs...)
if len(message) > 0 {
t.Errorf("\r%s\r\tLocation:\t%s\n"+
"\r\tError:%s\n"+
"\r\tMessages:\t%s\n\r",
getWhitespaceString(),
CallerInfo(),
indentMessageLines(failureMessage, 2),
message)
} else {
t.Errorf("\r%s\r\tLocation:\t%s\n"+
"\r\tError:%s\n\r",
getWhitespaceString(),
CallerInfo(),
indentMessageLines(failureMessage, 2))
}
return false
}
// Implements asserts that the specified object implements the given interface.
//
// assert.Implements(t, (*MyInterface)(nil), new(MyObject), "MyObject")
func Implements(t TestingT, interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool {
interfaceType := reflect.TypeOf(interfaceObject).Elem()
if !reflect.TypeOf(object).Implements(interfaceType) {
return Fail(t, fmt.Sprintf("Object must implement %v", interfaceType), msgAndArgs...)
}
return true
}
// IsType asserts that the specified objects are of the same type.
func IsType(t TestingT, expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool {
if !ObjectsAreEqual(reflect.TypeOf(object), reflect.TypeOf(expectedType)) {
return Fail(t, fmt.Sprintf("Object expected to be of type %v, but was %v", reflect.TypeOf(expectedType), reflect.TypeOf(object)), msgAndArgs...)
}
return true
}
// Equal asserts that two objects are equal.
//
// assert.Equal(t, 123, 123, "123 and 123 should be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func Equal(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if !ObjectsAreEqual(expected, actual) {
return Fail(t, fmt.Sprintf("Not equal: %#v (expected)\n"+
" != %#v (actual)", expected, actual), msgAndArgs...)
}
return true
}
// EqualValues asserts that two objects are equal or convertible to the same types
// and equal.
//
// assert.EqualValues(t, uint32(123), int32(123), "123 and 123 should be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if !ObjectsAreEqualValues(expected, actual) {
return Fail(t, fmt.Sprintf("Not equal: %#v (expected)\n"+
" != %#v (actual)", expected, actual), msgAndArgs...)
}
return true
}
// Exactly asserts that two objects are equal in value and type.
//
// assert.Exactly(t, int32(123), int64(123), "123 and 123 should NOT be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
aType := reflect.TypeOf(expected)
bType := reflect.TypeOf(actual)
if aType != bType {
return Fail(t, "Types expected to match exactly", "%v != %v", aType, bType)
}
return Equal(t, expected, actual, msgAndArgs...)
}
// NotNil asserts that the specified object is not nil.
//
// assert.NotNil(t, err, "err should be something")
//
// Returns whether the assertion was successful (true) or not (false).
func NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
success := true
if object == nil {
success = false
} else {
value := reflect.ValueOf(object)
kind := value.Kind()
if kind >= reflect.Chan && kind <= reflect.Slice && value.IsNil() {
success = false
}
}
if !success {
Fail(t, "Expected not to be nil.", msgAndArgs...)
}
return success
}
// isNil checks if a specified object is nil or not, without Failing.
func isNil(object interface{}) bool {
if object == nil {
return true
}
value := reflect.ValueOf(object)
kind := value.Kind()
if kind >= reflect.Chan && kind <= reflect.Slice && value.IsNil() {
return true
}
return false
}
// Nil asserts that the specified object is nil.
//
// assert.Nil(t, err, "err should be nothing")
//
// Returns whether the assertion was successful (true) or not (false).
func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
if isNil(object) {
return true
}
return Fail(t, fmt.Sprintf("Expected nil, but got: %#v", object), msgAndArgs...)
}
var zeros = []interface{}{
int(0),
int8(0),
int16(0),
int32(0),
int64(0),
uint(0),
uint8(0),
uint16(0),
uint32(0),
uint64(0),
float32(0),
float64(0),
}
// isEmpty gets whether the specified object is considered empty or not.
func isEmpty(object interface{}) bool {
if object == nil {
return true
} else if object == "" {
return true
} else if object == false {
return true
}
for _, v := range zeros {
if object == v {
return true
}
}
objValue := reflect.ValueOf(object)
switch objValue.Kind() {
case reflect.Map:
fallthrough
case reflect.Slice, reflect.Chan:
{
return (objValue.Len() == 0)
}
case reflect.Ptr:
{
switch object.(type) {
case *time.Time:
return object.(*time.Time).IsZero()
default:
return false
}
}
}
return false
}
// Empty asserts that the specified object is empty, i.e. nil, "", false, 0, or a
// slice, map, or channel with len == 0.
//
// assert.Empty(t, obj)
//
// Returns whether the assertion was successful (true) or not (false).
func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
pass := isEmpty(object)
if !pass {
Fail(t, fmt.Sprintf("Should be empty, but was %v", object), msgAndArgs...)
}
return pass
}
// NotEmpty asserts that the specified object is NOT empty, i.e. not nil, "", false, 0, or a
// slice, map, or channel with len == 0.
//
// if assert.NotEmpty(t, obj) {
// assert.Equal(t, "two", obj[1])
// }
//
// Returns whether the assertion was successful (true) or not (false).
func NotEmpty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
pass := !isEmpty(object)
if !pass {
Fail(t, fmt.Sprintf("Should NOT be empty, but was %v", object), msgAndArgs...)
}
return pass
}
// getLen tries to get the length of an object.
// It returns (false, 0) if that is impossible.
func getLen(x interface{}) (ok bool, length int) {
v := reflect.ValueOf(x)
defer func() {
if e := recover(); e != nil {
ok = false
}
}()
return true, v.Len()
}
// Len asserts that the specified object has the specified length.
// Len also fails if the object has a type that the builtin len() does not accept.
//
// assert.Len(t, mySlice, 3, "The size of slice is not 3")
//
// Returns whether the assertion was successful (true) or not (false).
func Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{}) bool {
ok, l := getLen(object)
if !ok {
return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", object), msgAndArgs...)
}
if l != length {
return Fail(t, fmt.Sprintf("\"%s\" should have %d item(s), but has %d", object, length, l), msgAndArgs...)
}
return true
}
// True asserts that the specified value is true.
//
// assert.True(t, myBool, "myBool should be true")
//
// Returns whether the assertion was successful (true) or not (false).
func True(t TestingT, value bool, msgAndArgs ...interface{}) bool {
if value != true {
return Fail(t, "Should be true", msgAndArgs...)
}
return true
}
// False asserts that the specified value is false.
//
// assert.False(t, myBool, "myBool should be false")
//
// Returns whether the assertion was successful (true) or not (false).
func False(t TestingT, value bool, msgAndArgs ...interface{}) bool {
if value != false {
return Fail(t, "Should be false", msgAndArgs...)
}
return true
}
// NotEqual asserts that the specified values are NOT equal.
//
// assert.NotEqual(t, obj1, obj2, "two objects shouldn't be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func NotEqual(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if ObjectsAreEqual(expected, actual) {
return Fail(t, "Should not be equal", msgAndArgs...)
}
return true
}
// includeElement loops over the list and checks whether the list includes the element.
// return (false, false) if impossible.
// return (true, false) if element was not found.
// return (true, true) if element was found.
func includeElement(list interface{}, element interface{}) (ok, found bool) {
listValue := reflect.ValueOf(list)
elementValue := reflect.ValueOf(element)
defer func() {
if e := recover(); e != nil {
ok = false
found = false
}
}()
if reflect.TypeOf(list).Kind() == reflect.String {
return true, strings.Contains(listValue.String(), elementValue.String())
}
for i := 0; i < listValue.Len(); i++ {
if ObjectsAreEqual(listValue.Index(i).Interface(), element) {
return true, true
}
}
return true, false
}
// Contains asserts that the specified string or list(array, slice...) contains the
// specified substring or element.
//
// assert.Contains(t, "Hello World", "World", "But 'Hello World' does contain 'World'")
// assert.Contains(t, ["Hello", "World"], "World", "But ["Hello", "World"] does contain 'World'")
//
// Returns whether the assertion was successful (true) or not (false).
func Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool {
ok, found := includeElement(s, contains)
if !ok {
return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", s), msgAndArgs...)
}
if !found {
return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", s, contains), msgAndArgs...)
}
return true
}
// NotContains asserts that the specified string or list(array, slice...) does NOT contain the
// specified substring or element.
//
// assert.NotContains(t, "Hello World", "Earth", "But 'Hello World' does NOT contain 'Earth'")
// assert.NotContains(t, ["Hello", "World"], "Earth", "But ['Hello', 'World'] does NOT contain 'Earth'")
//
// Returns whether the assertion was successful (true) or not (false).
func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool {
ok, found := includeElement(s, contains)
if !ok {
return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", s), msgAndArgs...)
}
if found {
return Fail(t, fmt.Sprintf("\"%s\" should not contain \"%s\"", s, contains), msgAndArgs...)
}
return true
}
// Condition uses a Comparison to assert a complex condition.
func Condition(t TestingT, comp Comparison, msgAndArgs ...interface{}) bool {
result := comp()
if !result {
Fail(t, "Condition failed!", msgAndArgs...)
}
return result
}
// PanicTestFunc defines a func that should be passed to the assert.Panics and assert.NotPanics
// methods, and represents a simple func that takes no arguments, and returns nothing.
type PanicTestFunc func()
// didPanic returns true if the function passed to it panics. Otherwise, it returns false.
func didPanic(f PanicTestFunc) (bool, interface{}) {
didPanic := false
var message interface{}
func() {
defer func() {
if message = recover(); message != nil {
didPanic = true
}
}()
// call the target function
f()
}()
return didPanic, message
}
// Panics asserts that the code inside the specified PanicTestFunc panics.
//
// assert.Panics(t, func(){
// GoCrazy()
// }, "Calling GoCrazy() should panic")
//
// Returns whether the assertion was successful (true) or not (false).
func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
if funcDidPanic, panicValue := didPanic(f); !funcDidPanic {
return Fail(t, fmt.Sprintf("func %#v should panic\n\r\tPanic value:\t%v", f, panicValue), msgAndArgs...)
}
return true
}
// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic.
//
// assert.NotPanics(t, func(){
// RemainCalm()
// }, "Calling RemainCalm() should NOT panic")
//
// Returns whether the assertion was successful (true) or not (false).
func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
if funcDidPanic, panicValue := didPanic(f); funcDidPanic {
return Fail(t, fmt.Sprintf("func %#v should not panic\n\r\tPanic value:\t%v", f, panicValue), msgAndArgs...)
}
return true
}
// WithinDuration asserts that the two times are within duration delta of each other.
//
// assert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second, "The difference should not be more than 10s")
//
// Returns whether the assertion was successful (true) or not (false).
func WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool {
dt := expected.Sub(actual)
if dt < -delta || dt > delta {
return Fail(t, fmt.Sprintf("Max difference between %v and %v allowed is %v, but difference was %v", expected, actual, delta, dt), msgAndArgs...)
}
return true
}
func toFloat(x interface{}) (float64, bool) {
var xf float64
xok := true
switch xn := x.(type) {
case uint8:
xf = float64(xn)
case uint16:
xf = float64(xn)
case uint32:
xf = float64(xn)
case uint64:
xf = float64(xn)
case int:
xf = float64(xn)
case int8:
xf = float64(xn)
case int16:
xf = float64(xn)
case int32:
xf = float64(xn)
case int64:
xf = float64(xn)
case float32:
xf = float64(xn)
case float64:
xf = float64(xn)
default:
xok = false
}
return xf, xok
}
// InDelta asserts that the two numerals are within delta of each other.
//
// assert.InDelta(t, math.Pi, (22 / 7.0), 0.01)
//
// Returns whether the assertion was successful (true) or not (false).
func InDelta(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {
af, aok := toFloat(expected)
bf, bok := toFloat(actual)
if !aok || !bok {
return Fail(t, fmt.Sprintf("Parameters must be numerical"), msgAndArgs...)
}
dt := af - bf
if dt < -delta || dt > delta {
return Fail(t, fmt.Sprintf("Max difference between %v and %v allowed is %v, but difference was %v", expected, actual, delta, dt), msgAndArgs...)
}
return true
}
// min(|expected|, |actual|) * epsilon
func calcEpsilonDelta(expected, actual interface{}, epsilon float64) float64 {
af, aok := toFloat(expected)
bf, bok := toFloat(actual)
if !aok || !bok {
// invalid input
return 0
}
if af < 0 {
af = -af
}
if bf < 0 {
bf = -bf
}
var delta float64
if af < bf {
delta = af * epsilon
} else {
delta = bf * epsilon
}
return delta
}
// InEpsilon asserts that expected and actual have a relative error less than epsilon
//
// Returns whether the assertion was successful (true) or not (false).
func InEpsilon(t TestingT, expected, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool {
delta := calcEpsilonDelta(expected, actual, epsilon)
return InDelta(t, expected, actual, delta, msgAndArgs...)
}
/*
Errors
*/
// NoError asserts that a function returned no error (i.e. `nil`).
//
// actualObj, err := SomeFunction()
// if assert.NoError(t, err) {
// assert.Equal(t, actualObj, expectedObj)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool {
if isNil(err) {
return true
}
return Fail(t, fmt.Sprintf("No error is expected but got %v", err), msgAndArgs...)
}
// Error asserts that a function returned an error (i.e. not `nil`).
//
// actualObj, err := SomeFunction()
// if assert.Error(t, err, "An error was expected") {
// assert.Equal(t, err, expectedError)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func Error(t TestingT, err error, msgAndArgs ...interface{}) bool {
message := messageFromMsgAndArgs(msgAndArgs...)
return NotNil(t, err, "An error is expected but got nil. %s", message)
}
// EqualError asserts that a function returned an error (i.e. not `nil`)
// and that it is equal to the provided error.
//
// actualObj, err := SomeFunction()
// if assert.Error(t, err, "An error was expected") {
// assert.Equal(t, err, expectedError)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func EqualError(t TestingT, theError error, errString string, msgAndArgs ...interface{}) bool {
message := messageFromMsgAndArgs(msgAndArgs...)
if !NotNil(t, theError, "An error is expected but got nil. %s", message) {
return false
}
s := "An error with value \"%s\" is expected but got \"%s\". %s"
return Equal(t, theError.Error(), errString,
s, errString, theError.Error(), message)
}
// matchRegexp returns true if a specified regexp matches a string.
func matchRegexp(rx interface{}, str interface{}) bool {
var r *regexp.Regexp
if rr, ok := rx.(*regexp.Regexp); ok {
r = rr
} else {
r = regexp.MustCompile(fmt.Sprint(rx))
}
return (r.FindStringIndex(fmt.Sprint(str)) != nil)
}
// Regexp asserts that a specified regexp matches a string.
//
// assert.Regexp(t, regexp.MustCompile("start"), "it's starting")
// assert.Regexp(t, "start...$", "it's not starting")
//
// Returns whether the assertion was successful (true) or not (false).
func Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {
match := matchRegexp(rx, str)
if !match {
Fail(t, fmt.Sprintf("Expect \"%v\" to match \"%v\"", str, rx), msgAndArgs...)
}
return match
}
// NotRegexp asserts that a specified regexp does not match a string.
//
// assert.NotRegexp(t, regexp.MustCompile("starts"), "it's starting")
// assert.NotRegexp(t, "^start", "it's not starting")
//
// Returns whether the assertion was successful (true) or not (false).
func NotRegexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {
match := matchRegexp(rx, str)
if match {
Fail(t, fmt.Sprintf("Expect \"%v\" to NOT match \"%v\"", str, rx), msgAndArgs...)
}
return !match
}
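Because the assert functions above only depend on the small TestingT interface, failures can be captured by any type with an Errorf method, which is how the removed tests drive them with a throwaway *testing.T. A hypothetical in-package sketch (captureT and exampleCapture are illustrative names, not part of the diff):

package assert

import "fmt"

// captureT is a hypothetical TestingT implementation that records failures
// instead of reporting them to the go test runner.
type captureT struct {
    failed   bool
    messages []string
}

func (c *captureT) Errorf(format string, args ...interface{}) {
    c.failed = true
    c.messages = append(c.messages, fmt.Sprintf(format, args...))
}

func exampleCapture() bool {
    ct := &captureT{}
    Equal(ct, 123, 456)         // fails, recorded on ct
    Contains(ct, "Hello", "He") // passes, nothing recorded
    return ct.failed            // true
}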


@@ -1,768 +0,0 @@
package assert
import (
"errors"
"regexp"
"testing"
"time"
)
// AssertionTesterInterface defines an interface to be used for testing assertion methods
type AssertionTesterInterface interface {
TestMethod()
}
// AssertionTesterConformingObject is an object that conforms to the AssertionTesterInterface interface
type AssertionTesterConformingObject struct {
}
func (a *AssertionTesterConformingObject) TestMethod() {
}
// AssertionTesterNonConformingObject is an object that does not conform to the AssertionTesterInterface interface
type AssertionTesterNonConformingObject struct {
}
func TestObjectsAreEqual(t *testing.T) {
if !ObjectsAreEqual("Hello World", "Hello World") {
t.Error("objectsAreEqual should return true")
}
if !ObjectsAreEqual(123, 123) {
t.Error("objectsAreEqual should return true")
}
if !ObjectsAreEqual(123.5, 123.5) {
t.Error("objectsAreEqual should return true")
}
if !ObjectsAreEqual([]byte("Hello World"), []byte("Hello World")) {
t.Error("objectsAreEqual should return true")
}
if !ObjectsAreEqual(nil, nil) {
t.Error("objectsAreEqual should return true")
}
if ObjectsAreEqual(map[int]int{5: 10}, map[int]int{10: 20}) {
t.Error("objectsAreEqual should return false")
}
if ObjectsAreEqual('x', "x") {
t.Error("objectsAreEqual should return false")
}
if ObjectsAreEqual("x", 'x') {
t.Error("objectsAreEqual should return false")
}
if ObjectsAreEqual(0, 0.1) {
t.Error("objectsAreEqual should return false")
}
if ObjectsAreEqual(0.1, 0) {
t.Error("objectsAreEqual should return false")
}
if ObjectsAreEqual(uint32(10), int32(10)) {
t.Error("objectsAreEqual should return false")
}
if !ObjectsAreEqualValues(uint32(10), int32(10)) {
t.Error("ObjectsAreEqualValues should return true")
}
}
func TestImplements(t *testing.T) {
mockT := new(testing.T)
if !Implements(mockT, (*AssertionTesterInterface)(nil), new(AssertionTesterConformingObject)) {
t.Error("Implements method should return true: AssertionTesterConformingObject implements AssertionTesterInterface")
}
if Implements(mockT, (*AssertionTesterInterface)(nil), new(AssertionTesterNonConformingObject)) {
t.Error("Implements method should return false: AssertionTesterNonConformingObject does not implements AssertionTesterInterface")
}
}
func TestIsType(t *testing.T) {
mockT := new(testing.T)
if !IsType(mockT, new(AssertionTesterConformingObject), new(AssertionTesterConformingObject)) {
t.Error("IsType should return true: AssertionTesterConformingObject is the same type as AssertionTesterConformingObject")
}
if IsType(mockT, new(AssertionTesterConformingObject), new(AssertionTesterNonConformingObject)) {
t.Error("IsType should return false: AssertionTesterConformingObject is not the same type as AssertionTesterNonConformingObject")
}
}
func TestEqual(t *testing.T) {
mockT := new(testing.T)
if !Equal(mockT, "Hello World", "Hello World") {
t.Error("Equal should return true")
}
if !Equal(mockT, 123, 123) {
t.Error("Equal should return true")
}
if !Equal(mockT, 123.5, 123.5) {
t.Error("Equal should return true")
}
if !Equal(mockT, []byte("Hello World"), []byte("Hello World")) {
t.Error("Equal should return true")
}
if !Equal(mockT, nil, nil) {
t.Error("Equal should return true")
}
if !Equal(mockT, int32(123), int32(123)) {
t.Error("Equal should return true")
}
if !Equal(mockT, uint64(123), uint64(123)) {
t.Error("Equal should return true")
}
funcA := func() int { return 42 }
if !Equal(mockT, funcA, funcA) {
t.Error("Equal should return true")
}
}
func TestNotNil(t *testing.T) {
mockT := new(testing.T)
if !NotNil(mockT, new(AssertionTesterConformingObject)) {
t.Error("NotNil should return true: object is not nil")
}
if NotNil(mockT, nil) {
t.Error("NotNil should return false: object is nil")
}
}
func TestNil(t *testing.T) {
mockT := new(testing.T)
if !Nil(mockT, nil) {
t.Error("Nil should return true: object is nil")
}
if Nil(mockT, new(AssertionTesterConformingObject)) {
t.Error("Nil should return false: object is not nil")
}
}
func TestTrue(t *testing.T) {
mockT := new(testing.T)
if !True(mockT, true) {
t.Error("True should return true")
}
if True(mockT, false) {
t.Error("True should return false")
}
}
func TestFalse(t *testing.T) {
mockT := new(testing.T)
if !False(mockT, false) {
t.Error("False should return true")
}
if False(mockT, true) {
t.Error("False should return false")
}
}
func TestExactly(t *testing.T) {
mockT := new(testing.T)
a := float32(1)
b := float64(1)
c := float32(1)
d := float32(2)
if Exactly(mockT, a, b) {
t.Error("Exactly should return false")
}
if Exactly(mockT, a, d) {
t.Error("Exactly should return false")
}
if !Exactly(mockT, a, c) {
t.Error("Exactly should return true")
}
if Exactly(mockT, nil, a) {
t.Error("Exactly should return false")
}
if Exactly(mockT, a, nil) {
t.Error("Exactly should return false")
}
}
func TestNotEqual(t *testing.T) {
mockT := new(testing.T)
if !NotEqual(mockT, "Hello World", "Hello World!") {
t.Error("NotEqual should return true")
}
if !NotEqual(mockT, 123, 1234) {
t.Error("NotEqual should return true")
}
if !NotEqual(mockT, 123.5, 123.55) {
t.Error("NotEqual should return true")
}
if !NotEqual(mockT, []byte("Hello World"), []byte("Hello World!")) {
t.Error("NotEqual should return true")
}
if !NotEqual(mockT, nil, new(AssertionTesterConformingObject)) {
t.Error("NotEqual should return true")
}
funcA := func() int { return 23 }
funcB := func() int { return 42 }
if !NotEqual(mockT, funcA, funcB) {
t.Error("NotEqual should return true")
}
if NotEqual(mockT, "Hello World", "Hello World") {
t.Error("NotEqual should return false")
}
if NotEqual(mockT, 123, 123) {
t.Error("NotEqual should return false")
}
if NotEqual(mockT, 123.5, 123.5) {
t.Error("NotEqual should return false")
}
if NotEqual(mockT, []byte("Hello World"), []byte("Hello World")) {
t.Error("NotEqual should return false")
}
if NotEqual(mockT, new(AssertionTesterConformingObject), new(AssertionTesterConformingObject)) {
t.Error("NotEqual should return false")
}
}
type A struct {
Name, Value string
}
func TestContains(t *testing.T) {
mockT := new(testing.T)
list := []string{"Foo", "Bar"}
complexList := []*A{
{"b", "c"},
{"d", "e"},
{"g", "h"},
{"j", "k"},
}
if !Contains(mockT, "Hello World", "Hello") {
t.Error("Contains should return true: \"Hello World\" contains \"Hello\"")
}
if Contains(mockT, "Hello World", "Salut") {
t.Error("Contains should return false: \"Hello World\" does not contain \"Salut\"")
}
if !Contains(mockT, list, "Bar") {
t.Error("Contains should return true: \"[\"Foo\", \"Bar\"]\" contains \"Bar\"")
}
if Contains(mockT, list, "Salut") {
t.Error("Contains should return false: \"[\"Foo\", \"Bar\"]\" does not contain \"Salut\"")
}
if !Contains(mockT, complexList, &A{"g", "h"}) {
t.Error("Contains should return true: complexList contains {\"g\", \"h\"}")
}
if Contains(mockT, complexList, &A{"g", "e"}) {
t.Error("Contains should return false: complexList contains {\"g\", \"e\"}")
}
}
func TestNotContains(t *testing.T) {
mockT := new(testing.T)
list := []string{"Foo", "Bar"}
if !NotContains(mockT, "Hello World", "Hello!") {
t.Error("NotContains should return true: \"Hello World\" does not contain \"Hello!\"")
}
if NotContains(mockT, "Hello World", "Hello") {
t.Error("NotContains should return false: \"Hello World\" contains \"Hello\"")
}
if !NotContains(mockT, list, "Foo!") {
t.Error("NotContains should return true: \"[\"Foo\", \"Bar\"]\" does not contain \"Foo!\"")
}
if NotContains(mockT, list, "Foo") {
t.Error("NotContains should return false: \"[\"Foo\", \"Bar\"]\" contains \"Foo\"")
}
}
func Test_includeElement(t *testing.T) {
list1 := []string{"Foo", "Bar"}
list2 := []int{1, 2}
ok, found := includeElement("Hello World", "World")
True(t, ok)
True(t, found)
ok, found = includeElement(list1, "Foo")
True(t, ok)
True(t, found)
ok, found = includeElement(list1, "Bar")
True(t, ok)
True(t, found)
ok, found = includeElement(list2, 1)
True(t, ok)
True(t, found)
ok, found = includeElement(list2, 2)
True(t, ok)
True(t, found)
ok, found = includeElement(list1, "Foo!")
True(t, ok)
False(t, found)
ok, found = includeElement(list2, 3)
True(t, ok)
False(t, found)
ok, found = includeElement(list2, "1")
True(t, ok)
False(t, found)
ok, found = includeElement(1433, "1")
False(t, ok)
False(t, found)
}
func TestCondition(t *testing.T) {
mockT := new(testing.T)
if !Condition(mockT, func() bool { return true }, "Truth") {
t.Error("Condition should return true")
}
if Condition(mockT, func() bool { return false }, "Lie") {
t.Error("Condition should return false")
}
}
func TestDidPanic(t *testing.T) {
if funcDidPanic, _ := didPanic(func() {
panic("Panic!")
}); !funcDidPanic {
t.Error("didPanic should return true")
}
if funcDidPanic, _ := didPanic(func() {
}); funcDidPanic {
t.Error("didPanic should return false")
}
}
func TestPanics(t *testing.T) {
mockT := new(testing.T)
if !Panics(mockT, func() {
panic("Panic!")
}) {
t.Error("Panics should return true")
}
if Panics(mockT, func() {
}) {
t.Error("Panics should return false")
}
}
func TestNotPanics(t *testing.T) {
mockT := new(testing.T)
if !NotPanics(mockT, func() {
}) {
t.Error("NotPanics should return true")
}
if NotPanics(mockT, func() {
panic("Panic!")
}) {
t.Error("NotPanics should return false")
}
}
func TestEqual_Funcs(t *testing.T) {
type f func() int
f1 := func() int { return 1 }
f2 := func() int { return 2 }
f1Copy := f1
Equal(t, f1Copy, f1, "Funcs are the same and should be considered equal")
NotEqual(t, f1, f2, "f1 and f2 are different")
}
func TestNoError(t *testing.T) {
mockT := new(testing.T)
// start with a nil error
var err error
True(t, NoError(mockT, err), "NoError should return True for nil arg")
// now set an error
err = errors.New("some error")
False(t, NoError(mockT, err), "NoError with error should return False")
}
func TestError(t *testing.T) {
mockT := new(testing.T)
// start with a nil error
var err error
False(t, Error(mockT, err), "Error should return False for nil arg")
// now set an error
err = errors.New("some error")
True(t, Error(mockT, err), "Error with error should return True")
}
func TestEqualError(t *testing.T) {
mockT := new(testing.T)
// start with a nil error
var err error
False(t, EqualError(mockT, err, ""),
"EqualError should return false for nil arg")
// now set an error
err = errors.New("some error")
False(t, EqualError(mockT, err, "Not some error"),
"EqualError should return false for different error string")
True(t, EqualError(mockT, err, "some error"),
"EqualError should return true")
}
func Test_isEmpty(t *testing.T) {
chWithValue := make(chan struct{}, 1)
chWithValue <- struct{}{}
True(t, isEmpty(""))
True(t, isEmpty(nil))
True(t, isEmpty([]string{}))
True(t, isEmpty(0))
True(t, isEmpty(int32(0)))
True(t, isEmpty(int64(0)))
True(t, isEmpty(false))
True(t, isEmpty(map[string]string{}))
True(t, isEmpty(new(time.Time)))
True(t, isEmpty(make(chan struct{})))
False(t, isEmpty("something"))
False(t, isEmpty(errors.New("something")))
False(t, isEmpty([]string{"something"}))
False(t, isEmpty(1))
False(t, isEmpty(true))
False(t, isEmpty(map[string]string{"Hello": "World"}))
False(t, isEmpty(chWithValue))
}
func TestEmpty(t *testing.T) {
mockT := new(testing.T)
chWithValue := make(chan struct{}, 1)
chWithValue <- struct{}{}
True(t, Empty(mockT, ""), "Empty string is empty")
True(t, Empty(mockT, nil), "Nil is empty")
True(t, Empty(mockT, []string{}), "Empty string array is empty")
True(t, Empty(mockT, 0), "Zero int value is empty")
True(t, Empty(mockT, false), "False value is empty")
True(t, Empty(mockT, make(chan struct{})), "Channel without values is empty")
False(t, Empty(mockT, "something"), "Non Empty string is not empty")
False(t, Empty(mockT, errors.New("something")), "Non nil object is not empty")
False(t, Empty(mockT, []string{"something"}), "Non empty string array is not empty")
False(t, Empty(mockT, 1), "Non-zero int value is not empty")
False(t, Empty(mockT, true), "True value is not empty")
False(t, Empty(mockT, chWithValue), "Channel with values is not empty")
}
func TestNotEmpty(t *testing.T) {
mockT := new(testing.T)
chWithValue := make(chan struct{}, 1)
chWithValue <- struct{}{}
False(t, NotEmpty(mockT, ""), "Empty string is empty")
False(t, NotEmpty(mockT, nil), "Nil is empty")
False(t, NotEmpty(mockT, []string{}), "Empty string array is empty")
False(t, NotEmpty(mockT, 0), "Zero int value is empty")
False(t, NotEmpty(mockT, false), "False value is empty")
False(t, NotEmpty(mockT, make(chan struct{})), "Channel without values is empty")
True(t, NotEmpty(mockT, "something"), "Non Empty string is not empty")
True(t, NotEmpty(mockT, errors.New("something")), "Non nil object is not empty")
True(t, NotEmpty(mockT, []string{"something"}), "Non empty string array is not empty")
True(t, NotEmpty(mockT, 1), "Non-zero int value is not empty")
True(t, NotEmpty(mockT, true), "True value is not empty")
True(t, NotEmpty(mockT, chWithValue), "Channel with values is not empty")
}
func Test_getLen(t *testing.T) {
falseCases := []interface{}{
nil,
0,
true,
false,
'A',
struct{}{},
}
for _, v := range falseCases {
ok, l := getLen(v)
False(t, ok, "Expected getLen fail to get length of %#v", v)
Equal(t, 0, l, "getLen should return 0 for %#v", v)
}
ch := make(chan int, 5)
ch <- 1
ch <- 2
ch <- 3
trueCases := []struct {
v interface{}
l int
}{
{[]int{1, 2, 3}, 3},
{[...]int{1, 2, 3}, 3},
{"ABC", 3},
{map[int]int{1: 2, 2: 4, 3: 6}, 3},
{ch, 3},
{[]int{}, 0},
{map[int]int{}, 0},
{make(chan int), 0},
{[]int(nil), 0},
{map[int]int(nil), 0},
{(chan int)(nil), 0},
}
for _, c := range trueCases {
ok, l := getLen(c.v)
True(t, ok, "Expected getLen success to get length of %#v", c.v)
Equal(t, c.l, l)
}
}
func TestLen(t *testing.T) {
mockT := new(testing.T)
False(t, Len(mockT, nil, 0), "nil does not have length")
False(t, Len(mockT, 0, 0), "int does not have length")
False(t, Len(mockT, true, 0), "true does not have length")
False(t, Len(mockT, false, 0), "false does not have length")
False(t, Len(mockT, 'A', 0), "Rune does not have length")
False(t, Len(mockT, struct{}{}, 0), "Struct does not have length")
ch := make(chan int, 5)
ch <- 1
ch <- 2
ch <- 3
cases := []struct {
v interface{}
l int
}{
{[]int{1, 2, 3}, 3},
{[...]int{1, 2, 3}, 3},
{"ABC", 3},
{map[int]int{1: 2, 2: 4, 3: 6}, 3},
{ch, 3},
{[]int{}, 0},
{map[int]int{}, 0},
{make(chan int), 0},
{[]int(nil), 0},
{map[int]int(nil), 0},
{(chan int)(nil), 0},
}
for _, c := range cases {
True(t, Len(mockT, c.v, c.l), "%#v have %d items", c.v, c.l)
}
cases = []struct {
v interface{}
l int
}{
{[]int{1, 2, 3}, 4},
{[...]int{1, 2, 3}, 2},
{"ABC", 2},
{map[int]int{1: 2, 2: 4, 3: 6}, 4},
{ch, 2},
{[]int{}, 1},
{map[int]int{}, 1},
{make(chan int), 1},
{[]int(nil), 1},
{map[int]int(nil), 1},
{(chan int)(nil), 1},
}
for _, c := range cases {
False(t, Len(mockT, c.v, c.l), "%#v have %d items", c.v, c.l)
}
}
func TestWithinDuration(t *testing.T) {
mockT := new(testing.T)
a := time.Now()
b := a.Add(10 * time.Second)
True(t, WithinDuration(mockT, a, b, 10*time.Second), "A 10s difference is within a 10s time difference")
True(t, WithinDuration(mockT, b, a, 10*time.Second), "A 10s difference is within a 10s time difference")
False(t, WithinDuration(mockT, a, b, 9*time.Second), "A 10s difference is not within a 9s time difference")
False(t, WithinDuration(mockT, b, a, 9*time.Second), "A 10s difference is not within a 9s time difference")
False(t, WithinDuration(mockT, a, b, -9*time.Second), "A 10s difference is not within a 9s time difference")
False(t, WithinDuration(mockT, b, a, -9*time.Second), "A 10s difference is not within a 9s time difference")
False(t, WithinDuration(mockT, a, b, -11*time.Second), "A 10s difference is not within a 9s time difference")
False(t, WithinDuration(mockT, b, a, -11*time.Second), "A 10s difference is not within a 9s time difference")
}
func TestInDelta(t *testing.T) {
mockT := new(testing.T)
True(t, InDelta(mockT, 1.001, 1, 0.01), "|1.001 - 1| <= 0.01")
True(t, InDelta(mockT, 1, 1.001, 0.01), "|1 - 1.001| <= 0.01")
True(t, InDelta(mockT, 1, 2, 1), "|1 - 2| <= 1")
False(t, InDelta(mockT, 1, 2, 0.5), "Expected |1 - 2| <= 0.5 to fail")
False(t, InDelta(mockT, 2, 1, 0.5), "Expected |2 - 1| <= 0.5 to fail")
False(t, InDelta(mockT, "", nil, 1), "Expected non numerals to fail")
cases := []struct {
a, b interface{}
delta float64
}{
{uint8(2), uint8(1), 1},
{uint16(2), uint16(1), 1},
{uint32(2), uint32(1), 1},
{uint64(2), uint64(1), 1},
{int(2), int(1), 1},
{int8(2), int8(1), 1},
{int16(2), int16(1), 1},
{int32(2), int32(1), 1},
{int64(2), int64(1), 1},
{float32(2), float32(1), 1},
{float64(2), float64(1), 1},
}
for _, tc := range cases {
True(t, InDelta(mockT, tc.a, tc.b, tc.delta), "Expected |%v - %v| <= %v", tc.a, tc.b, tc.delta)
}
}
func TestInEpsilon(t *testing.T) {
mockT := new(testing.T)
cases := []struct {
a, b interface{}
epsilon float64
}{
{uint8(2), uint16(2), .001},
{2.1, 2.2, 0.1},
{2.2, 2.1, 0.1},
{-2.1, -2.2, 0.1},
{-2.2, -2.1, 0.1},
{uint64(100), uint8(101), 0.01},
{0.1, -0.1, 2},
}
for _, tc := range cases {
True(t, InEpsilon(mockT, tc.a, tc.b, tc.epsilon, "Expected %v and %v to have a relative difference of %v", tc.a, tc.b, tc.epsilon))
}
cases = []struct {
a, b interface{}
epsilon float64
}{
{uint8(2), int16(-2), .001},
{uint64(100), uint8(102), 0.01},
{2.1, 2.2, 0.001},
{2.2, 2.1, 0.001},
{2.1, -2.2, 1},
{2.1, "bla-bla", 0},
{0.1, -0.1, 1.99},
}
for _, tc := range cases {
False(t, InEpsilon(mockT, tc.a, tc.b, tc.epsilon, "Expected %v and %v to have a relative difference of %v", tc.a, tc.b, tc.epsilon))
}
}
func TestRegexp(t *testing.T) {
mockT := new(testing.T)
cases := []struct {
rx, str string
}{
{"^start", "start of the line"},
{"end$", "in the end"},
{"[0-9]{3}[.-]?[0-9]{2}[.-]?[0-9]{2}", "My phone number is 650.12.34"},
}
for _, tc := range cases {
True(t, Regexp(mockT, tc.rx, tc.str))
True(t, Regexp(mockT, regexp.MustCompile(tc.rx), tc.str))
False(t, NotRegexp(mockT, tc.rx, tc.str))
False(t, NotRegexp(mockT, regexp.MustCompile(tc.rx), tc.str))
}
cases = []struct {
rx, str string
}{
{"^asdfastart", "Not the start of the line"},
{"end$", "in the end."},
{"[0-9]{3}[.-]?[0-9]{2}[.-]?[0-9]{2}", "My phone number is 650.12a.34"},
}
for _, tc := range cases {
False(t, Regexp(mockT, tc.rx, tc.str), "Expected \"%s\" to not match \"%s\"", tc.rx, tc.str)
False(t, Regexp(mockT, regexp.MustCompile(tc.rx), tc.str))
True(t, NotRegexp(mockT, tc.rx, tc.str))
True(t, NotRegexp(mockT, regexp.MustCompile(tc.rx), tc.str))
}
}


@@ -1,150 +0,0 @@
// Package assert is a set of comprehensive testing tools for use with the normal Go testing system.
//
// Example Usage
//
// The following is a complete example using assert in a standard test function:
// import (
// "testing"
// "github.com/stretchr/testify/assert"
// )
//
// func TestSomething(t *testing.T) {
//
// var a string = "Hello"
// var b string = "Hello"
//
// assert.Equal(t, a, b, "The two words should be the same.")
//
// }
//
// If you assert many times, use the form below:
//
// import (
// "testing"
// "github.com/stretchr/testify/assert"
// )
//
// func TestSomething(t *testing.T) {
// assert := assert.New(t)
//
// var a string = "Hello"
// var b string = "Hello"
//
// assert.Equal(a, b, "The two words should be the same.")
// }
//
// Assertions
//
// Assertions allow you to easily write test code, and are global funcs in the `assert` package.
// All assertion functions take, as the first argument, the `*testing.T` object provided by the
// testing framework. This allows the assertion funcs to write the failings and other details to
// the correct place.
//
// Every assertion function also takes an optional string message as the final argument,
// allowing custom error messages to be appended to the message the assertion method outputs.
//
// Here is an overview of the assert functions:
//
// assert.Equal(t, expected, actual [, message [, format-args]])
//
// assert.NotEqual(t, notExpected, actual [, message [, format-args]])
//
// assert.True(t, actualBool [, message [, format-args]])
//
// assert.False(t, actualBool [, message [, format-args]])
//
// assert.Nil(t, actualObject [, message [, format-args]])
//
// assert.NotNil(t, actualObject [, message [, format-args]])
//
// assert.Empty(t, actualObject [, message [, format-args]])
//
// assert.NotEmpty(t, actualObject [, message [, format-args]])
//
// assert.Len(t, actualObject, expectedLength, [, message [, format-args]])
//
// assert.Error(t, errorObject [, message [, format-args]])
//
// assert.NoError(t, errorObject [, message [, format-args]])
//
// assert.EqualError(t, theError, errString [, message [, format-args]])
//
// assert.Implements(t, (*MyInterface)(nil), new(MyObject) [,message [, format-args]])
//
// assert.IsType(t, expectedObject, actualObject [, message [, format-args]])
//
// assert.Contains(t, stringOrSlice, substringOrElement [, message [, format-args]])
//
// assert.NotContains(t, stringOrSlice, substringOrElement [, message [, format-args]])
//
// assert.Panics(t, func(){
//
// // call code that should panic
//
// } [, message [, format-args]])
//
// assert.NotPanics(t, func(){
//
// // call code that should not panic
//
// } [, message [, format-args]])
//
// assert.WithinDuration(t, timeA, timeB, deltaTime, [, message [, format-args]])
//
// assert.InDelta(t, numA, numB, delta, [, message [, format-args]])
//
// assert.InEpsilon(t, numA, numB, epsilon, [, message [, format-args]])
//
// The assert package also contains an Assertions object; it has the same assertion methods.
//
// Here is an overview of the assert functions:
// assert.Equal(expected, actual [, message [, format-args]])
//
// assert.NotEqual(notExpected, actual [, message [, format-args]])
//
// assert.True(actualBool [, message [, format-args]])
//
// assert.False(actualBool [, message [, format-args]])
//
// assert.Nil(actualObject [, message [, format-args]])
//
// assert.NotNil(actualObject [, message [, format-args]])
//
// assert.Empty(actualObject [, message [, format-args]])
//
// assert.NotEmpty(actualObject [, message [, format-args]])
//
// assert.Len(actualObject, expectedLength, [, message [, format-args]])
//
// assert.Error(errorObject [, message [, format-args]])
//
// assert.NoError(errorObject [, message [, format-args]])
//
// assert.EqualError(theError, errString [, message [, format-args]])
//
// assert.Implements((*MyInterface)(nil), new(MyObject) [,message [, format-args]])
//
// assert.IsType(expectedObject, actualObject [, message [, format-args]])
//
// assert.Contains(stringOrSlice, substringOrElement [, message [, format-args]])
//
// assert.NotContains(stringOrSlice, substringOrElement [, message [, format-args]])
//
// assert.Panics(func(){
//
// // call code that should panic
//
// } [, message [, format-args]])
//
// assert.NotPanics(func(){
//
// // call code that should not panic
//
// } [, message [, format-args]])
//
// assert.WithinDuration(timeA, timeB, deltaTime, [, message [, format-args]])
//
// assert.InDelta(numA, numB, delta, [, message [, format-args]])
//
// assert.InEpsilon(numA, numB, epsilon, [, message [, format-args]])
package assert


@@ -1,10 +0,0 @@
package assert
import (
"errors"
)
// AnError is an error instance useful for testing. If the code does not care
// about error specifics, and only needs to return the error for example, this
// error should be used to make the test code more readable.
var AnError = errors.New("assert.AnError general error for testing")
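AnError is meant for tests that only care that some error occurred, not which one. A hypothetical in-package sketch (loadConfig and the test name are illustrative, not part of the diff):

package assert

import "testing"

// loadConfig is a hypothetical function under test; only the fact that it
// can fail matters here, so it returns the shared AnError sentinel.
func loadConfig(path string) error {
    if path == "" {
        return AnError
    }
    return nil
}

func TestLoadConfigFails(t *testing.T) {
    err := loadConfig("")
    Error(t, err)
    EqualError(t, err, "assert.AnError general error for testing")
}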


@@ -1,262 +0,0 @@
package assert
import "time"
type Assertions struct {
t TestingT
}
func New(t TestingT) *Assertions {
return &Assertions{
t: t,
}
}
// Fail reports a failure through the TestingT instance.
func (a *Assertions) Fail(failureMessage string, msgAndArgs ...interface{}) bool {
return Fail(a.t, failureMessage, msgAndArgs...)
}
// Implements asserts that the specified object implements the given interface.
//
// assert.Implements((*MyInterface)(nil), new(MyObject), "MyObject")
func (a *Assertions) Implements(interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool {
return Implements(a.t, interfaceObject, object, msgAndArgs...)
}
// IsType asserts that the specified objects are of the same type.
func (a *Assertions) IsType(expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool {
return IsType(a.t, expectedType, object, msgAndArgs...)
}
// Equal asserts that two objects are equal.
//
// assert.Equal(123, 123, "123 and 123 should be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Equal(expected, actual interface{}, msgAndArgs ...interface{}) bool {
return Equal(a.t, expected, actual, msgAndArgs...)
}
// EqualValues asserts that two objects are equal or convertible to the same types
// and equal.
//
// assert.EqualValues(uint32(123), int32(123), "123 and 123 should be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) EqualValues(expected, actual interface{}, msgAndArgs ...interface{}) bool {
return EqualValues(a.t, expected, actual, msgAndArgs...)
}
// Exactly asserts that two objects are equal in value and type.
//
// assert.Exactly(int32(123), int64(123), "123 and 123 should NOT be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Exactly(expected, actual interface{}, msgAndArgs ...interface{}) bool {
return Exactly(a.t, expected, actual, msgAndArgs...)
}
// NotNil asserts that the specified object is not nil.
//
// assert.NotNil(err, "err should be something")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NotNil(object interface{}, msgAndArgs ...interface{}) bool {
return NotNil(a.t, object, msgAndArgs...)
}
// Nil asserts that the specified object is nil.
//
// assert.Nil(err, "err should be nothing")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Nil(object interface{}, msgAndArgs ...interface{}) bool {
return Nil(a.t, object, msgAndArgs...)
}
// Empty asserts that the specified object is empty. I.e. nil, "", false, 0 or a
// slice with len == 0.
//
// assert.Empty(obj)
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Empty(object interface{}, msgAndArgs ...interface{}) bool {
return Empty(a.t, object, msgAndArgs...)
}
// NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or a
// slice with len == 0.
//
// if assert.NotEmpty(obj) {
// assert.Equal("two", obj[1])
// }
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NotEmpty(object interface{}, msgAndArgs ...interface{}) bool {
return NotEmpty(a.t, object, msgAndArgs...)
}
// Len asserts that the specified object has the specified length.
// Len also fails if the object has a type that the builtin len() does not accept.
//
// assert.Len(mySlice, 3, "The size of slice is not 3")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Len(object interface{}, length int, msgAndArgs ...interface{}) bool {
return Len(a.t, object, length, msgAndArgs...)
}
// True asserts that the specified value is true.
//
// assert.True(myBool, "myBool should be true")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) True(value bool, msgAndArgs ...interface{}) bool {
return True(a.t, value, msgAndArgs...)
}
// False asserts that the specified value is false.
//
// assert.False(myBool, "myBool should be false")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) False(value bool, msgAndArgs ...interface{}) bool {
return False(a.t, value, msgAndArgs...)
}
// NotEqual asserts that the specified values are NOT equal.
//
// assert.NotEqual(obj1, obj2, "two objects shouldn't be equal")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NotEqual(expected, actual interface{}, msgAndArgs ...interface{}) bool {
return NotEqual(a.t, expected, actual, msgAndArgs...)
}
// Contains asserts that the specified string contains the specified substring.
//
// assert.Contains("Hello World", "World", "But 'Hello World' does contain 'World'")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Contains(s, contains interface{}, msgAndArgs ...interface{}) bool {
return Contains(a.t, s, contains, msgAndArgs...)
}
// NotContains asserts that the specified string does NOT contain the specified substring.
//
// assert.NotContains("Hello World", "Earth", "But 'Hello World' does NOT contain 'Earth'")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NotContains(s, contains interface{}, msgAndArgs ...interface{}) bool {
return NotContains(a.t, s, contains, msgAndArgs...)
}
// Condition uses a Comparison to assert a complex condition.
func (a *Assertions) Condition(comp Comparison, msgAndArgs ...interface{}) bool {
return Condition(a.t, comp, msgAndArgs...)
}
// Panics asserts that the code inside the specified PanicTestFunc panics.
//
// assert.Panics(func(){
// GoCrazy()
// }, "Calling GoCrazy() should panic")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Panics(f PanicTestFunc, msgAndArgs ...interface{}) bool {
return Panics(a.t, f, msgAndArgs...)
}
// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic.
//
// assert.NotPanics(func(){
// RemainCalm()
// }, "Calling RemainCalm() should NOT panic")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NotPanics(f PanicTestFunc, msgAndArgs ...interface{}) bool {
return NotPanics(a.t, f, msgAndArgs...)
}
// WithinDuration asserts that the two times are within duration delta of each other.
//
// assert.WithinDuration(time.Now(), time.Now(), 10*time.Second, "The difference should not be more than 10s")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) WithinDuration(expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool {
return WithinDuration(a.t, expected, actual, delta, msgAndArgs...)
}
// InDelta asserts that the two numerals are within delta of each other.
//
// assert.InDelta(math.Pi, (22 / 7.0), 0.01)
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) InDelta(expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {
return InDelta(a.t, expected, actual, delta, msgAndArgs...)
}
// InEpsilon asserts that expected and actual have a relative error less than epsilon
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) InEpsilon(expected, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool {
return InEpsilon(a.t, expected, actual, epsilon, msgAndArgs...)
}
// NoError asserts that a function returned no error (i.e. `nil`).
//
// actualObj, err := SomeFunction()
// if assert.NoError(err) {
// assert.Equal(actualObj, expectedObj)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NoError(theError error, msgAndArgs ...interface{}) bool {
return NoError(a.t, theError, msgAndArgs...)
}
// Error asserts that a function returned an error (i.e. not `nil`).
//
// actualObj, err := SomeFunction()
// if assert.Error(err, "An error was expected") {
// assert.Equal(err, expectedError)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Error(theError error, msgAndArgs ...interface{}) bool {
return Error(a.t, theError, msgAndArgs...)
}
// EqualError asserts that a function returned an error (i.e. not `nil`)
// and that it is equal to the provided error.
//
// actualObj, err := SomeFunction()
// if assert.Error(err, "An error was expected") {
// assert.Equal(err, expectedError)
// }
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) EqualError(theError error, errString string, msgAndArgs ...interface{}) bool {
return EqualError(a.t, theError, errString, msgAndArgs...)
}
// Regexp asserts that a specified regexp matches a string.
//
// assert.Regexp(t, regexp.MustCompile("start"), "it's starting")
// assert.Regexp(t, "start...$", "it's not starting")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) Regexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {
return Regexp(a.t, rx, str, msgAndArgs...)
}
// NotRegexp asserts that a specified regexp does not match a string.
//
// assert.NotRegexp(t, regexp.MustCompile("starts"), "it's starting")
// assert.NotRegexp(t, "^start", "it's not starting")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) NotRegexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {
return NotRegexp(a.t, rx, str, msgAndArgs...)
}
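The forwarding methods above only drop the explicit `t` argument; everything else mirrors the package-level assertions. A minimal sketch of how a test might use the wrapper, assuming the usual `github.com/stretchr/testify/assert` import path (the test names and values are illustrative, not from this diff):

```go
package example

import (
	"errors"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

// TestForwardedAssertions shows the Assertions wrapper in use: New binds the
// *testing.T once, so every forwarded call can omit the t argument.
func TestForwardedAssertions(t *testing.T) {
	a := assert.New(t)

	var err error
	a.NoError(err, "a nil error should satisfy NoError")

	err = errors.New("boom")
	a.Error(err, "a non-nil error should satisfy Error")
	a.EqualError(err, "boom")
	a.Regexp("^bo", err.Error())

	now := time.Now()
	a.WithinDuration(now, now.Add(5*time.Second), 10*time.Second)
	a.InDelta(22.0/7.0, 3.14159, 0.01)

	a.Panics(func() { panic("kaboom") }, "the function should panic")
	a.NotPanics(func() {}, "an empty function should not panic")
}
```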


@@ -1,526 +0,0 @@
package assert
import (
"errors"
"regexp"
"testing"
"time"
)
func TestImplementsWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.Implements((*AssertionTesterInterface)(nil), new(AssertionTesterConformingObject)) {
t.Error("Implements method should return true: AssertionTesterConformingObject implements AssertionTesterInterface")
}
if assert.Implements((*AssertionTesterInterface)(nil), new(AssertionTesterNonConformingObject)) {
t.Error("Implements method should return false: AssertionTesterNonConformingObject does not implements AssertionTesterInterface")
}
}
func TestIsTypeWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.IsType(new(AssertionTesterConformingObject), new(AssertionTesterConformingObject)) {
t.Error("IsType should return true: AssertionTesterConformingObject is the same type as AssertionTesterConformingObject")
}
if assert.IsType(new(AssertionTesterConformingObject), new(AssertionTesterNonConformingObject)) {
t.Error("IsType should return false: AssertionTesterConformingObject is not the same type as AssertionTesterNonConformingObject")
}
}
func TestEqualWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.Equal("Hello World", "Hello World") {
t.Error("Equal should return true")
}
if !assert.Equal(123, 123) {
t.Error("Equal should return true")
}
if !assert.Equal(123.5, 123.5) {
t.Error("Equal should return true")
}
if !assert.Equal([]byte("Hello World"), []byte("Hello World")) {
t.Error("Equal should return true")
}
if !assert.Equal(nil, nil) {
t.Error("Equal should return true")
}
}
func TestEqualValuesWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.EqualValues(uint32(10), int32(10)) {
t.Error("EqualValues should return true")
}
}
func TestNotNilWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.NotNil(new(AssertionTesterConformingObject)) {
t.Error("NotNil should return true: object is not nil")
}
if assert.NotNil(nil) {
t.Error("NotNil should return false: object is nil")
}
}
func TestNilWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.Nil(nil) {
t.Error("Nil should return true: object is nil")
}
if assert.Nil(new(AssertionTesterConformingObject)) {
t.Error("Nil should return false: object is not nil")
}
}
func TestTrueWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.True(true) {
t.Error("True should return true")
}
if assert.True(false) {
t.Error("True should return false")
}
}
func TestFalseWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.False(false) {
t.Error("False should return true")
}
if assert.False(true) {
t.Error("False should return false")
}
}
func TestExactlyWrapper(t *testing.T) {
assert := New(new(testing.T))
a := float32(1)
b := float64(1)
c := float32(1)
d := float32(2)
if assert.Exactly(a, b) {
t.Error("Exactly should return false")
}
if assert.Exactly(a, d) {
t.Error("Exactly should return false")
}
if !assert.Exactly(a, c) {
t.Error("Exactly should return true")
}
if assert.Exactly(nil, a) {
t.Error("Exactly should return false")
}
if assert.Exactly(a, nil) {
t.Error("Exactly should return false")
}
}
func TestNotEqualWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.NotEqual("Hello World", "Hello World!") {
t.Error("NotEqual should return true")
}
if !assert.NotEqual(123, 1234) {
t.Error("NotEqual should return true")
}
if !assert.NotEqual(123.5, 123.55) {
t.Error("NotEqual should return true")
}
if !assert.NotEqual([]byte("Hello World"), []byte("Hello World!")) {
t.Error("NotEqual should return true")
}
if !assert.NotEqual(nil, new(AssertionTesterConformingObject)) {
t.Error("NotEqual should return true")
}
}
func TestContainsWrapper(t *testing.T) {
assert := New(new(testing.T))
list := []string{"Foo", "Bar"}
if !assert.Contains("Hello World", "Hello") {
t.Error("Contains should return true: \"Hello World\" contains \"Hello\"")
}
if assert.Contains("Hello World", "Salut") {
t.Error("Contains should return false: \"Hello World\" does not contain \"Salut\"")
}
if !assert.Contains(list, "Foo") {
t.Error("Contains should return true: \"[\"Foo\", \"Bar\"]\" contains \"Foo\"")
}
if assert.Contains(list, "Salut") {
t.Error("Contains should return false: \"[\"Foo\", \"Bar\"]\" does not contain \"Salut\"")
}
}
func TestNotContainsWrapper(t *testing.T) {
assert := New(new(testing.T))
list := []string{"Foo", "Bar"}
if !assert.NotContains("Hello World", "Hello!") {
t.Error("NotContains should return true: \"Hello World\" does not contain \"Hello!\"")
}
if assert.NotContains("Hello World", "Hello") {
t.Error("NotContains should return false: \"Hello World\" contains \"Hello\"")
}
if !assert.NotContains(list, "Foo!") {
t.Error("NotContains should return true: \"[\"Foo\", \"Bar\"]\" does not contain \"Foo!\"")
}
if assert.NotContains(list, "Foo") {
t.Error("NotContains should return false: \"[\"Foo\", \"Bar\"]\" contains \"Foo\"")
}
}
func TestConditionWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.Condition(func() bool { return true }, "Truth") {
t.Error("Condition should return true")
}
if assert.Condition(func() bool { return false }, "Lie") {
t.Error("Condition should return false")
}
}
func TestDidPanicWrapper(t *testing.T) {
if funcDidPanic, _ := didPanic(func() {
panic("Panic!")
}); !funcDidPanic {
t.Error("didPanic should return true")
}
if funcDidPanic, _ := didPanic(func() {
}); funcDidPanic {
t.Error("didPanic should return false")
}
}
func TestPanicsWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.Panics(func() {
panic("Panic!")
}) {
t.Error("Panics should return true")
}
if assert.Panics(func() {
}) {
t.Error("Panics should return false")
}
}
func TestNotPanicsWrapper(t *testing.T) {
assert := New(new(testing.T))
if !assert.NotPanics(func() {
}) {
t.Error("NotPanics should return true")
}
if assert.NotPanics(func() {
panic("Panic!")
}) {
t.Error("NotPanics should return false")
}
}
func TestEqualWrapper_Funcs(t *testing.T) {
assert := New(t)
type f func() int
var f1 f = func() int { return 1 }
var f2 f = func() int { return 2 }
var f1_copy f = f1
assert.Equal(f1_copy, f1, "Funcs are the same and should be considered equal")
assert.NotEqual(f1, f2, "f1 and f2 are different")
}
func TestNoErrorWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
// start with a nil error
var err error = nil
assert.True(mockAssert.NoError(err), "NoError should return True for nil arg")
// now set an error
err = errors.New("Some error")
assert.False(mockAssert.NoError(err), "NoError with error should return False")
}
func TestErrorWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
// start with a nil error
var err error = nil
assert.False(mockAssert.Error(err), "Error should return False for nil arg")
// now set an error
err = errors.New("Some error")
assert.True(mockAssert.Error(err), "Error with error should return True")
}
func TestEqualErrorWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
// start with a nil error
var err error
assert.False(mockAssert.EqualError(err, ""),
"EqualError should return false for nil arg")
// now set an error
err = errors.New("some error")
assert.False(mockAssert.EqualError(err, "Not some error"),
"EqualError should return false for different error string")
assert.True(mockAssert.EqualError(err, "some error"),
"EqualError should return true")
}
func TestEmptyWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
assert.True(mockAssert.Empty(""), "Empty string is empty")
assert.True(mockAssert.Empty(nil), "Nil is empty")
assert.True(mockAssert.Empty([]string{}), "Empty string array is empty")
assert.True(mockAssert.Empty(0), "Zero int value is empty")
assert.True(mockAssert.Empty(false), "False value is empty")
assert.False(mockAssert.Empty("something"), "Non Empty string is not empty")
assert.False(mockAssert.Empty(errors.New("something")), "Non nil object is not empty")
assert.False(mockAssert.Empty([]string{"something"}), "Non empty string array is not empty")
assert.False(mockAssert.Empty(1), "Non-zero int value is not empty")
assert.False(mockAssert.Empty(true), "True value is not empty")
}
func TestNotEmptyWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
assert.False(mockAssert.NotEmpty(""), "Empty string is empty")
assert.False(mockAssert.NotEmpty(nil), "Nil is empty")
assert.False(mockAssert.NotEmpty([]string{}), "Empty string array is empty")
assert.False(mockAssert.NotEmpty(0), "Zero int value is empty")
assert.False(mockAssert.NotEmpty(false), "False value is empty")
assert.True(mockAssert.NotEmpty("something"), "Non Empty string is not empty")
assert.True(mockAssert.NotEmpty(errors.New("something")), "Non nil object is not empty")
assert.True(mockAssert.NotEmpty([]string{"something"}), "Non empty string array is not empty")
assert.True(mockAssert.NotEmpty(1), "Non-zero int value is not empty")
assert.True(mockAssert.NotEmpty(true), "True value is not empty")
}
func TestLenWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
assert.False(mockAssert.Len(nil, 0), "nil does not have length")
assert.False(mockAssert.Len(0, 0), "int does not have length")
assert.False(mockAssert.Len(true, 0), "true does not have length")
assert.False(mockAssert.Len(false, 0), "false does not have length")
assert.False(mockAssert.Len('A', 0), "Rune does not have length")
assert.False(mockAssert.Len(struct{}{}, 0), "Struct does not have length")
ch := make(chan int, 5)
ch <- 1
ch <- 2
ch <- 3
cases := []struct {
v interface{}
l int
}{
{[]int{1, 2, 3}, 3},
{[...]int{1, 2, 3}, 3},
{"ABC", 3},
{map[int]int{1: 2, 2: 4, 3: 6}, 3},
{ch, 3},
{[]int{}, 0},
{map[int]int{}, 0},
{make(chan int), 0},
{[]int(nil), 0},
{map[int]int(nil), 0},
{(chan int)(nil), 0},
}
for _, c := range cases {
assert.True(mockAssert.Len(c.v, c.l), "%#v have %d items", c.v, c.l)
}
}
func TestWithinDurationWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
a := time.Now()
b := a.Add(10 * time.Second)
assert.True(mockAssert.WithinDuration(a, b, 10*time.Second), "A 10s difference is within a 10s time difference")
assert.True(mockAssert.WithinDuration(b, a, 10*time.Second), "A 10s difference is within a 10s time difference")
assert.False(mockAssert.WithinDuration(a, b, 9*time.Second), "A 10s difference is not within a 9s time difference")
assert.False(mockAssert.WithinDuration(b, a, 9*time.Second), "A 10s difference is not within a 9s time difference")
assert.False(mockAssert.WithinDuration(a, b, -9*time.Second), "A 10s difference is not within a 9s time difference")
assert.False(mockAssert.WithinDuration(b, a, -9*time.Second), "A 10s difference is not within a 9s time difference")
assert.False(mockAssert.WithinDuration(a, b, -11*time.Second), "A 10s difference is not within a 9s time difference")
assert.False(mockAssert.WithinDuration(b, a, -11*time.Second), "A 10s difference is not within a 9s time difference")
}
func TestInDeltaWrapper(t *testing.T) {
assert := New(new(testing.T))
True(t, assert.InDelta(1.001, 1, 0.01), "|1.001 - 1| <= 0.01")
True(t, assert.InDelta(1, 1.001, 0.01), "|1 - 1.001| <= 0.01")
True(t, assert.InDelta(1, 2, 1), "|1 - 2| <= 1")
False(t, assert.InDelta(1, 2, 0.5), "Expected |1 - 2| <= 0.5 to fail")
False(t, assert.InDelta(2, 1, 0.5), "Expected |2 - 1| <= 0.5 to fail")
False(t, assert.InDelta("", nil, 1), "Expected non numerals to fail")
cases := []struct {
a, b interface{}
delta float64
}{
{uint8(2), uint8(1), 1},
{uint16(2), uint16(1), 1},
{uint32(2), uint32(1), 1},
{uint64(2), uint64(1), 1},
{int(2), int(1), 1},
{int8(2), int8(1), 1},
{int16(2), int16(1), 1},
{int32(2), int32(1), 1},
{int64(2), int64(1), 1},
{float32(2), float32(1), 1},
{float64(2), float64(1), 1},
}
for _, tc := range cases {
True(t, assert.InDelta(tc.a, tc.b, tc.delta), "Expected |%V - %V| <= %v", tc.a, tc.b, tc.delta)
}
}
func TestInEpsilonWrapper(t *testing.T) {
assert := New(new(testing.T))
cases := []struct {
a, b interface{}
epsilon float64
}{
{uint8(2), uint16(2), .001},
{2.1, 2.2, 0.1},
{2.2, 2.1, 0.1},
{-2.1, -2.2, 0.1},
{-2.2, -2.1, 0.1},
{uint64(100), uint8(101), 0.01},
{0.1, -0.1, 2},
}
for _, tc := range cases {
True(t, assert.InEpsilon(tc.a, tc.b, tc.epsilon, "Expected %V and %V to have a relative difference of %v", tc.a, tc.b, tc.epsilon))
}
cases = []struct {
a, b interface{}
epsilon float64
}{
{uint8(2), int16(-2), .001},
{uint64(100), uint8(102), 0.01},
{2.1, 2.2, 0.001},
{2.2, 2.1, 0.001},
{2.1, -2.2, 1},
{2.1, "bla-bla", 0},
{0.1, -0.1, 1.99},
}
for _, tc := range cases {
False(t, assert.InEpsilon(tc.a, tc.b, tc.epsilon, "Expected %V and %V to have a relative difference of %v", tc.a, tc.b, tc.epsilon))
}
}
func TestRegexpWrapper(t *testing.T) {
assert := New(new(testing.T))
cases := []struct {
rx, str string
}{
{"^start", "start of the line"},
{"end$", "in the end"},
{"[0-9]{3}[.-]?[0-9]{2}[.-]?[0-9]{2}", "My phone number is 650.12.34"},
}
for _, tc := range cases {
True(t, assert.Regexp(tc.rx, tc.str))
True(t, assert.Regexp(regexp.MustCompile(tc.rx), tc.str))
False(t, assert.NotRegexp(tc.rx, tc.str))
False(t, assert.NotRegexp(regexp.MustCompile(tc.rx), tc.str))
}
cases = []struct {
rx, str string
}{
{"^asdfastart", "Not the start of the line"},
{"end$", "in the end."},
{"[0-9]{3}[.-]?[0-9]{2}[.-]?[0-9]{2}", "My phone number is 650.12a.34"},
}
for _, tc := range cases {
False(t, assert.Regexp(tc.rx, tc.str), "Expected \"%s\" to not match \"%s\"", tc.rx, tc.str)
False(t, assert.Regexp(regexp.MustCompile(tc.rx), tc.str))
True(t, assert.NotRegexp(tc.rx, tc.str))
True(t, assert.NotRegexp(regexp.MustCompile(tc.rx), tc.str))
}
}


@@ -1,157 +0,0 @@
package assert
import (
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"strings"
)
// httpCode is a helper that returns the HTTP code of the response. It returns -1
// if building a new request fails.
func httpCode(handler http.HandlerFunc, mode, url string, values url.Values) int {
w := httptest.NewRecorder()
req, err := http.NewRequest(mode, url+"?"+values.Encode(), nil)
if err != nil {
return -1
}
handler(w, req)
return w.Code
}
// HTTPSuccess asserts that a specified handler returns a success status code.
//
// assert.HTTPSuccess(t, myHandler, "POST", "http://www.google.com", nil)
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPSuccess(t TestingT, handler http.HandlerFunc, mode, url string, values url.Values) bool {
code := httpCode(handler, mode, url, values)
if code == -1 {
return false
}
return code >= http.StatusOK && code <= http.StatusPartialContent
}
// HTTPRedirect asserts that a specified handler returns a redirect status code.
//
// assert.HTTPRedirect(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPRedirect(t TestingT, handler http.HandlerFunc, mode, url string, values url.Values) bool {
code := httpCode(handler, mode, url, values)
if code == -1 {
return false
}
return code >= http.StatusMultipleChoices && code <= http.StatusTemporaryRedirect
}
// HTTPError asserts that a specified handler returns an error status code.
//
// assert.HTTPError(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPError(t TestingT, handler http.HandlerFunc, mode, url string, values url.Values) bool {
code := httpCode(handler, mode, url, values)
if code == -1 {
return false
}
return code >= http.StatusBadRequest
}
// HttpBody is a helper that returns the HTTP body of the response. It returns an
// empty string if building a new request fails.
func HttpBody(handler http.HandlerFunc, mode, url string, values url.Values) string {
w := httptest.NewRecorder()
req, err := http.NewRequest(mode, url+"?"+values.Encode(), nil)
if err != nil {
return ""
}
handler(w, req)
return w.Body.String()
}
// HTTPBodyContains asserts that a specified handler returns a
// body that contains a string.
//
// assert.HTTPBodyContains(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyContains(t TestingT, handler http.HandlerFunc, mode, url string, values url.Values, str interface{}) bool {
body := HttpBody(handler, mode, url, values)
contains := strings.Contains(body, fmt.Sprint(str))
if !contains {
Fail(t, fmt.Sprintf("Expected response body for \"%s\" to contain \"%s\" but found \"%s\"", url+"?"+values.Encode(), str, body))
}
return contains
}
// HTTPBodyNotContains asserts that a specified handler returns a
// body that does not contain a string.
//
// assert.HTTPBodyNotContains(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyNotContains(t TestingT, handler http.HandlerFunc, mode, url string, values url.Values, str interface{}) bool {
body := HttpBody(handler, mode, url, values)
contains := strings.Contains(body, fmt.Sprint(str))
if contains {
Fail(t, "Expected response body for %s to NOT contain \"%s\" but found \"%s\"", url+"?"+values.Encode(), str, body)
}
return !contains
}
//
// Assertions Wrappers
//
// HTTPSuccess asserts that a specified handler returns a success status code.
//
// assert.HTTPSuccess(myHandler, "POST", "http://www.google.com", nil)
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) HTTPSuccess(handler http.HandlerFunc, mode, url string, values url.Values) bool {
return HTTPSuccess(a.t, handler, mode, url, values)
}
// HTTPRedirect asserts that a specified handler returns a redirect status code.
//
// assert.HTTPRedirect(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) HTTPRedirect(handler http.HandlerFunc, mode, url string, values url.Values) bool {
return HTTPRedirect(a.t, handler, mode, url, values)
}
// HTTPError asserts that a specified handler returns an error status code.
//
// assert.HTTPError(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) HTTPError(handler http.HandlerFunc, mode, url string, values url.Values) bool {
return HTTPError(a.t, handler, mode, url, values)
}
// HTTPBodyContains asserts that a specified handler returns a
// body that contains a string.
//
// assert.HTTPBodyContains(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) HTTPBodyContains(handler http.HandlerFunc, mode, url string, values url.Values, str interface{}) bool {
return HTTPBodyContains(a.t, handler, mode, url, values, str)
}
// HTTPBodyNotContains asserts that a specified handler returns a
// body that does not contain a string.
//
// assert.HTTPBodyNotContains(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky")
//
// Returns whether the assertion was successful (true) or not (false).
func (a *Assertions) HTTPBodyNotContains(handler http.HandlerFunc, mode, url string, values url.Values, str interface{}) bool {
return HTTPBodyNotContains(a.t, handler, mode, url, values, str)
}
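Taken together, the helpers above drive an `http.HandlerFunc` through an `httptest` recorder and assert on the resulting status code or body. A small sketch, assuming the usual `github.com/stretchr/testify/assert` import path and a throwaway handler written for illustration:

```go
package example

import (
	"fmt"
	"net/http"
	"net/url"
	"testing"

	"github.com/stretchr/testify/assert"
)

// greet is a throwaway handler used only to exercise the HTTP assertions.
func greet(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", r.FormValue("name"))
}

func TestGreetHandler(t *testing.T) {
	values := url.Values{"name": []string{"World"}}

	// Status-code assertion: the handler is driven through an httptest recorder.
	assert.HTTPSuccess(t, greet, "GET", "/greet", values)

	// Body assertions: inspect what the handler wrote for this query string.
	assert.HTTPBodyContains(t, greet, "GET", "/greet", values, "Hello, World!")
	assert.HTTPBodyNotContains(t, greet, "GET", "/greet", values, "Goodbye")
}
```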


@@ -1,86 +0,0 @@
package assert
import (
"fmt"
"net/http"
"net/url"
"testing"
)
func httpOK(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}
func httpRedirect(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusTemporaryRedirect)
}
func httpError(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
}
func TestHTTPStatuses(t *testing.T) {
assert := New(t)
mockT := new(testing.T)
assert.Equal(HTTPSuccess(mockT, httpOK, "GET", "/", nil), true)
assert.Equal(HTTPSuccess(mockT, httpRedirect, "GET", "/", nil), false)
assert.Equal(HTTPSuccess(mockT, httpError, "GET", "/", nil), false)
assert.Equal(HTTPRedirect(mockT, httpOK, "GET", "/", nil), false)
assert.Equal(HTTPRedirect(mockT, httpRedirect, "GET", "/", nil), true)
assert.Equal(HTTPRedirect(mockT, httpError, "GET", "/", nil), false)
assert.Equal(HTTPError(mockT, httpOK, "GET", "/", nil), false)
assert.Equal(HTTPError(mockT, httpRedirect, "GET", "/", nil), false)
assert.Equal(HTTPError(mockT, httpError, "GET", "/", nil), true)
}
func TestHTTPStatusesWrapper(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
assert.Equal(mockAssert.HTTPSuccess(httpOK, "GET", "/", nil), true)
assert.Equal(mockAssert.HTTPSuccess(httpRedirect, "GET", "/", nil), false)
assert.Equal(mockAssert.HTTPSuccess(httpError, "GET", "/", nil), false)
assert.Equal(mockAssert.HTTPRedirect(httpOK, "GET", "/", nil), false)
assert.Equal(mockAssert.HTTPRedirect(httpRedirect, "GET", "/", nil), true)
assert.Equal(mockAssert.HTTPRedirect(httpError, "GET", "/", nil), false)
assert.Equal(mockAssert.HTTPError(httpOK, "GET", "/", nil), false)
assert.Equal(mockAssert.HTTPError(httpRedirect, "GET", "/", nil), false)
assert.Equal(mockAssert.HTTPError(httpError, "GET", "/", nil), true)
}
func httpHelloName(w http.ResponseWriter, r *http.Request) {
name := r.FormValue("name")
w.Write([]byte(fmt.Sprintf("Hello, %s!", name)))
}
func TestHttpBody(t *testing.T) {
assert := New(t)
mockT := new(testing.T)
assert.True(HTTPBodyContains(mockT, httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "Hello, World!"))
assert.True(HTTPBodyContains(mockT, httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "World"))
assert.False(HTTPBodyContains(mockT, httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "world"))
assert.False(HTTPBodyNotContains(mockT, httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "Hello, World!"))
assert.False(HTTPBodyNotContains(mockT, httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "World"))
assert.True(HTTPBodyNotContains(mockT, httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "world"))
}
func TestHttpBodyWrappers(t *testing.T) {
assert := New(t)
mockAssert := New(new(testing.T))
assert.True(mockAssert.HTTPBodyContains(httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "Hello, World!"))
assert.True(mockAssert.HTTPBodyContains(httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "World"))
assert.False(mockAssert.HTTPBodyContains(httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "world"))
assert.False(mockAssert.HTTPBodyNotContains(httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "Hello, World!"))
assert.False(mockAssert.HTTPBodyNotContains(httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "World"))
assert.True(mockAssert.HTTPBodyNotContains(httpHelloName, "GET", "/", url.Values{"name": []string{"World"}}, "world"))
}


@@ -1,43 +0,0 @@
// Package mock provides a system for mocking your objects and verifying that calls are happening as expected.
//
// Example Usage
//
// The mock package provides an object, Mock, that tracks activity on another object. It is usually
// embedded into a test object as shown below:
//
// type MyTestObject struct {
// // add a Mock object instance
// mock.Mock
//
// // other fields go here as normal
// }
//
// When implementing the methods of an interface, you wire your functions up
// to call the Mock.Called(args...) method, and return the appropriate values.
//
// For example, to mock a method that saves the name and age of a person and returns
// the year of their birth or an error, you might write this:
//
// func (o *MyTestObject) SavePersonDetails(firstname, lastname string, age int) (int, error) {
// args := o.Called(firstname, lastname, age)
// return args.Int(0), args.Error(1)
// }
//
// The Int, Error and Bool methods are examples of strongly typed getters that take the argument
// index position. Given this argument list:
//
// (12, true, "Something")
//
// You could read them out strongly typed like this:
//
// args.Int(0)
// args.Bool(1)
// args.String(2)
//
// For objects of your own type, use the generic Arguments.Get(index) method and make a type assertion:
//
// return args.Get(0).(*MyObject), args.Get(1).(*AnotherObjectOfMine)
//
// This may cause a panic if the object you are getting is nil (the type assertion will fail), in those
// cases you should check for nil first.
package mock
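As a worked version of the package comment above, here is a minimal sketch of a mocked object and a test that exercises it; the `PersonStore` type and the import paths are illustrative, not taken from this repository:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)

// PersonStore embeds mock.Mock and mirrors the SavePersonDetails example
// from the package comment above.
type PersonStore struct {
	mock.Mock
}

// SavePersonDetails forwards the call to the embedded Mock and converts
// the recorded return values back to concrete types.
func (s *PersonStore) SavePersonDetails(firstname, lastname string, age int) (int, error) {
	args := s.Called(firstname, lastname, age)
	return args.Int(0), args.Error(1)
}

func TestSavePersonDetails(t *testing.T) {
	store := new(PersonStore)

	// Expect exactly one call with these arguments and stub its return values.
	store.On("SavePersonDetails", "Ada", "Lovelace", 36).Return(1815, nil).Once()

	year, err := store.SavePersonDetails("Ada", "Lovelace", 36)
	assert.NoError(t, err)
	assert.Equal(t, 1815, year)

	// Fails the test if any expectation registered with On/Return was not met.
	store.AssertExpectations(t)
}
```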


@@ -1,510 +0,0 @@
package mock
import (
"fmt"
"github.com/stretchr/objx"
"github.com/stretchr/testify/assert"
"reflect"
"runtime"
"strings"
"sync"
)
// TestingT is an interface wrapper around *testing.T
type TestingT interface {
Logf(format string, args ...interface{})
Errorf(format string, args ...interface{})
}
/*
Call
*/
// Call represents a method call and is used for setting expectations,
// as well as recording activity.
type Call struct {
// The name of the method that was or will be called.
Method string
// Holds the arguments of the method.
Arguments Arguments
// Holds the arguments that should be returned when
// this method is called.
ReturnArguments Arguments
// The number of times to return the return arguments when setting
// expectations. 0 means to always return the value.
Repeatability int
}
// Mock is the workhorse used to track activity on another object.
// For an example of its usage, refer to the "Example Usage" section at the top of this document.
type Mock struct {
// The method name that is currently
// being referred to by the On method.
onMethodName string
// An array of the arguments that are
// currently being referred to by the On method.
onMethodArguments Arguments
// Represents the calls that are expected of
// an object.
ExpectedCalls []Call
// Holds the calls that were made to this mocked object.
Calls []Call
// TestData holds any data that might be useful for testing. Testify ignores
// this data completely allowing you to do whatever you like with it.
testData objx.Map
mutex sync.Mutex
}
// TestData holds any data that might be useful for testing. Testify ignores
// this data completely allowing you to do whatever you like with it.
func (m *Mock) TestData() objx.Map {
if m.testData == nil {
m.testData = make(objx.Map)
}
return m.testData
}
/*
Setting expectations
*/
// On starts a description of an expectation of the specified method
// being called.
//
// Mock.On("MyMethod", arg1, arg2)
func (m *Mock) On(methodName string, arguments ...interface{}) *Mock {
m.onMethodName = methodName
m.onMethodArguments = arguments
return m
}
// Return finishes a description of an expectation of the method (and arguments)
// specified in the most recent On method call.
//
// Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2)
func (m *Mock) Return(returnArguments ...interface{}) *Mock {
m.ExpectedCalls = append(m.ExpectedCalls, Call{m.onMethodName, m.onMethodArguments, returnArguments, 0})
return m
}
// Once indicates that the mock should only return the value once.
//
// Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Once()
func (m *Mock) Once() {
m.ExpectedCalls[len(m.ExpectedCalls)-1].Repeatability = 1
}
// Twice indicates that the mock should only return the value twice.
//
// Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Twice()
func (m *Mock) Twice() {
m.ExpectedCalls[len(m.ExpectedCalls)-1].Repeatability = 2
}
// Times indicates that the mock should only return the indicated number
// of times.
//
// Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Times(5)
func (m *Mock) Times(i int) {
m.ExpectedCalls[len(m.ExpectedCalls)-1].Repeatability = i
}
/*
Recording and responding to activity
*/
func (m *Mock) findExpectedCall(method string, arguments ...interface{}) (int, *Call) {
for i, call := range m.ExpectedCalls {
if call.Method == method && call.Repeatability > -1 {
_, diffCount := call.Arguments.Diff(arguments)
if diffCount == 0 {
return i, &call
}
}
}
return -1, nil
}
func (m *Mock) findClosestCall(method string, arguments ...interface{}) (bool, *Call) {
diffCount := 0
var closestCall *Call = nil
for _, call := range m.ExpectedCalls {
if call.Method == method {
_, tempDiffCount := call.Arguments.Diff(arguments)
if tempDiffCount < diffCount || diffCount == 0 {
diffCount = tempDiffCount
closestCall = &call
}
}
}
if closestCall == nil {
return false, nil
}
return true, closestCall
}
func callString(method string, arguments Arguments, includeArgumentValues bool) string {
var argValsString string = ""
if includeArgumentValues {
var argVals []string
for argIndex, arg := range arguments {
argVals = append(argVals, fmt.Sprintf("%d: %v", argIndex, arg))
}
argValsString = fmt.Sprintf("\n\t\t%s", strings.Join(argVals, "\n\t\t"))
}
return fmt.Sprintf("%s(%s)%s", method, arguments.String(), argValsString)
}
// Called tells the mock object that a method has been called, and gets an array
// of arguments to return. Panics if the call is unexpected (i.e. not preceded by
// appropriate .On(...).Return(...) calls).
func (m *Mock) Called(arguments ...interface{}) Arguments {
defer m.mutex.Unlock()
m.mutex.Lock()
// get the calling function's name
pc, _, _, ok := runtime.Caller(1)
if !ok {
panic("Couldn't get the caller information")
}
functionPath := runtime.FuncForPC(pc).Name()
parts := strings.Split(functionPath, ".")
functionName := parts[len(parts)-1]
found, call := m.findExpectedCall(functionName, arguments...)
switch {
case found < 0:
// we have to fail here - because we don't know what to do
// as the return arguments. This is because:
//
// a) this is a totally unexpected call to this method,
// b) the arguments are not what was expected, or
// c) the developer has forgotten to add an accompanying On...Return pair.
closestFound, closestCall := m.findClosestCall(functionName, arguments...)
if closestFound {
panic(fmt.Sprintf("\n\nmock: Unexpected Method Call\n-----------------------------\n\n%s\n\nThe closest call I have is: \n\n%s\n", callString(functionName, arguments, true), callString(functionName, closestCall.Arguments, true)))
} else {
panic(fmt.Sprintf("\nassert: mock: I don't know what to return because the method call was unexpected.\n\tEither do Mock.On(\"%s\").Return(...) first, or remove the %s() call.\n\tThis method was unexpected:\n\t\t%s\n\tat: %s", functionName, functionName, callString(functionName, arguments, true), assert.CallerInfo()))
}
case call.Repeatability == 1:
call.Repeatability = -1
m.ExpectedCalls[found] = *call
case call.Repeatability > 1:
call.Repeatability -= 1
m.ExpectedCalls[found] = *call
}
// add the call
m.Calls = append(m.Calls, Call{functionName, arguments, make([]interface{}, 0), 0})
return call.ReturnArguments
}
/*
Assertions
*/
// AssertExpectationsForObjects asserts that everything specified with On and Return
// of the specified objects was in fact called as expected.
//
// Calls may have occurred in any order.
func AssertExpectationsForObjects(t TestingT, testObjects ...interface{}) bool {
var success bool = true
for _, obj := range testObjects {
mockObj := obj.(Mock)
success = success && mockObj.AssertExpectations(t)
}
return success
}
// AssertExpectations asserts that everything specified with On and Return was
// in fact called as expected. Calls may have occurred in any order.
func (m *Mock) AssertExpectations(t TestingT) bool {
var somethingMissing bool = false
var failedExpectations int = 0
// iterate through each expectation
for _, expectedCall := range m.ExpectedCalls {
switch {
case !m.methodWasCalled(expectedCall.Method, expectedCall.Arguments):
somethingMissing = true
failedExpectations++
t.Logf("\u274C\t%s(%s)", expectedCall.Method, expectedCall.Arguments.String())
case expectedCall.Repeatability > 0:
somethingMissing = true
failedExpectations++
default:
t.Logf("\u2705\t%s(%s)", expectedCall.Method, expectedCall.Arguments.String())
}
}
if somethingMissing {
t.Errorf("FAIL: %d out of %d expectation(s) were met.\n\tThe code you are testing needs to make %d more call(s).\n\tat: %s", len(m.ExpectedCalls)-failedExpectations, len(m.ExpectedCalls), failedExpectations, assert.CallerInfo())
}
return !somethingMissing
}
// AssertNumberOfCalls asserts that the method was called expectedCalls times.
func (m *Mock) AssertNumberOfCalls(t TestingT, methodName string, expectedCalls int) bool {
var actualCalls int = 0
for _, call := range m.Calls {
if call.Method == methodName {
actualCalls++
}
}
return assert.Equal(t, actualCalls, expectedCalls, fmt.Sprintf("Expected number of calls (%d) does not match the actual number of calls (%d).", expectedCalls, actualCalls))
}
// AssertCalled asserts that the method was called.
func (m *Mock) AssertCalled(t TestingT, methodName string, arguments ...interface{}) bool {
if !assert.True(t, m.methodWasCalled(methodName, arguments), fmt.Sprintf("The \"%s\" method should have been called with %d argument(s), but was not.", methodName, len(arguments))) {
t.Logf("%s", m.ExpectedCalls)
return false
}
return true
}
// AssertNotCalled asserts that the method was not called.
func (m *Mock) AssertNotCalled(t TestingT, methodName string, arguments ...interface{}) bool {
if !assert.False(t, m.methodWasCalled(methodName, arguments), fmt.Sprintf("The \"%s\" method was called with %d argument(s), but should NOT have been.", methodName, len(arguments))) {
t.Logf("%s", m.ExpectedCalls)
return false
}
return true
}
func (m *Mock) methodWasCalled(methodName string, expected []interface{}) bool {
for _, call := range m.Calls {
if call.Method == methodName {
_, differences := Arguments(expected).Diff(call.Arguments)
if differences == 0 {
// found the expected call
return true
}
}
}
// we didn't find the expected call
return false
}
/*
Arguments
*/
// Arguments holds an array of method arguments or return values.
type Arguments []interface{}
const (
// The "any" argument. Used in Diff and Assert when
// the argument being tested shouldn't be taken into consideration.
Anything string = "mock.Anything"
)
// AnythingOfTypeArgument is a string that contains the type of an argument
// for use when type checking. Used in Diff and Assert.
type AnythingOfTypeArgument string
// AnythingOfType returns an AnythingOfTypeArgument object containing the
// name of the type to check for. Used in Diff and Assert.
//
// For example:
// Assert(t, AnythingOfType("string"), AnythingOfType("int"))
func AnythingOfType(t string) AnythingOfTypeArgument {
return AnythingOfTypeArgument(t)
}
// Get returns the argument at the specified index.
func (args Arguments) Get(index int) interface{} {
if index+1 > len(args) {
panic(fmt.Sprintf("assert: arguments: Cannot call Get(%d) because there are %d argument(s).", index, len(args)))
}
return args[index]
}
// Is gets whether the objects match the arguments specified.
func (args Arguments) Is(objects ...interface{}) bool {
for i, obj := range args {
if obj != objects[i] {
return false
}
}
return true
}
// Diff gets a string describing the differences between the arguments
// and the specified objects.
//
// Returns the diff string and number of differences found.
func (args Arguments) Diff(objects []interface{}) (string, int) {
var output string = "\n"
var differences int
var maxArgCount int = len(args)
if len(objects) > maxArgCount {
maxArgCount = len(objects)
}
for i := 0; i < maxArgCount; i++ {
var actual, expected interface{}
if len(objects) <= i {
actual = "(Missing)"
} else {
actual = objects[i]
}
if len(args) <= i {
expected = "(Missing)"
} else {
expected = args[i]
}
if reflect.TypeOf(expected) == reflect.TypeOf((*AnythingOfTypeArgument)(nil)).Elem() {
// type checking
if reflect.TypeOf(actual).Name() != string(expected.(AnythingOfTypeArgument)) && reflect.TypeOf(actual).String() != string(expected.(AnythingOfTypeArgument)) {
// not match
differences++
output = fmt.Sprintf("%s\t%d: \u274C type %s != type %s - %s\n", output, i, expected, reflect.TypeOf(actual).Name(), actual)
}
} else {
// normal checking
if assert.ObjectsAreEqual(expected, Anything) || assert.ObjectsAreEqual(actual, Anything) || assert.ObjectsAreEqual(actual, expected) {
// match
output = fmt.Sprintf("%s\t%d: \u2705 %s == %s\n", output, i, actual, expected)
} else {
// not match
differences++
output = fmt.Sprintf("%s\t%d: \u274C %s != %s\n", output, i, actual, expected)
}
}
}
if differences == 0 {
return "No differences.", differences
}
return output, differences
}
// Assert compares the arguments with the specified objects and fails if
// they do not exactly match.
func (args Arguments) Assert(t TestingT, objects ...interface{}) bool {
// get the differences
diff, diffCount := args.Diff(objects)
if diffCount == 0 {
return true
}
// there are differences... report them...
t.Logf(diff)
t.Errorf("%sArguments do not match.", assert.CallerInfo())
return false
}
// String gets the argument at the specified index. Panics if there is no argument, or
// if the argument is of the wrong type.
//
// If no index is provided, String() returns a complete string representation
// of the arguments.
func (args Arguments) String(indexOrNil ...int) string {
if len(indexOrNil) == 0 {
// normal String() method - return a string representation of the args
var argsStr []string
for _, arg := range args {
argsStr = append(argsStr, fmt.Sprintf("%s", reflect.TypeOf(arg)))
}
return strings.Join(argsStr, ",")
} else if len(indexOrNil) == 1 {
// Index has been specified - get the argument at that index
var index int = indexOrNil[0]
var s string
var ok bool
if s, ok = args.Get(index).(string); !ok {
panic(fmt.Sprintf("assert: arguments: String(%d) failed because object wasn't correct type: %s", index, args.Get(index)))
}
return s
}
panic(fmt.Sprintf("assert: arguments: Wrong number of arguments passed to String. Must be 0 or 1, not %d", len(indexOrNil)))
}
// Int gets the argument at the specified index. Panics if there is no argument, or
// if the argument is of the wrong type.
func (args Arguments) Int(index int) int {
var s int
var ok bool
if s, ok = args.Get(index).(int); !ok {
panic(fmt.Sprintf("assert: arguments: Int(%d) failed because object wasn't correct type: %v", index, args.Get(index)))
}
return s
}
// Error gets the argument at the specified index. Panics if there is no argument, or
// if the argument is of the wrong type.
func (args Arguments) Error(index int) error {
obj := args.Get(index)
var s error
var ok bool
if obj == nil {
return nil
}
if s, ok = obj.(error); !ok {
panic(fmt.Sprintf("assert: arguments: Error(%d) failed because object wasn't correct type: %v", index, args.Get(index)))
}
return s
}
// Bool gets the argument at the specified index. Panics if there is no argument, or
// if the argument is of the wrong type.
func (args Arguments) Bool(index int) bool {
var s bool
var ok bool
if s, ok = args.Get(index).(bool); !ok {
panic(fmt.Sprintf("assert: arguments: Bool(%d) failed because object wasn't correct type: %v", index, args.Get(index)))
}
return s
}
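To tie the expectation and argument-matching pieces together, a short sketch using `Anything` and `AnythingOfType`; the `Notifier` type is invented for illustration and the import paths assume the usual testify layout:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)

// Notifier is an invented mock used only to illustrate argument matching.
type Notifier struct {
	mock.Mock
}

func (n *Notifier) Notify(user string, code int) error {
	args := n.Called(user, code)
	return args.Error(0)
}

func TestNotifyArgumentMatching(t *testing.T) {
	n := new(Notifier)

	// mock.Anything matches any value in that position; mock.AnythingOfType
	// only requires that the argument has the named type.
	n.On("Notify", mock.Anything, mock.AnythingOfType("int")).Return(nil)

	assert.NoError(t, n.Notify("alice", 42))

	// Verify the recorded call by exact arguments and by call count.
	n.AssertCalled(t, "Notify", "alice", 42)
	n.AssertNumberOfCalls(t, "Notify", 1)
}
```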


@@ -1,669 +0,0 @@
package mock
import (
"errors"
"github.com/stretchr/testify/assert"
"testing"
)
/*
Test objects
*/
// ExampleInterface represents an example interface.
type ExampleInterface interface {
TheExampleMethod(a, b, c int) (int, error)
}
// TestExampleImplementation is a test implementation of ExampleInterface
type TestExampleImplementation struct {
Mock
}
func (i *TestExampleImplementation) TheExampleMethod(a, b, c int) (int, error) {
args := i.Called(a, b, c)
return args.Int(0), errors.New("Whoops")
}
func (i *TestExampleImplementation) TheExampleMethod2(yesorno bool) {
i.Called(yesorno)
}
type ExampleType struct{}
func (i *TestExampleImplementation) TheExampleMethod3(et *ExampleType) error {
args := i.Called(et)
return args.Error(0)
}
/*
Mock
*/
func Test_Mock_TestData(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
if assert.NotNil(t, mockedService.TestData()) {
mockedService.TestData().Set("something", 123)
assert.Equal(t, 123, mockedService.TestData().Get("something").Data())
}
}
func Test_Mock_On(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
assert.Equal(t, mockedService.On("TheExampleMethod"), &mockedService.Mock)
assert.Equal(t, "TheExampleMethod", mockedService.onMethodName)
}
func Test_Mock_On_WithArgs(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
assert.Equal(t, mockedService.On("TheExampleMethod", 1, 2, 3), &mockedService.Mock)
assert.Equal(t, "TheExampleMethod", mockedService.onMethodName)
assert.Equal(t, 1, mockedService.onMethodArguments[0])
assert.Equal(t, 2, mockedService.onMethodArguments[1])
assert.Equal(t, 3, mockedService.onMethodArguments[2])
}
func Test_Mock_Return(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
assert.Equal(t, mockedService.On("TheExampleMethod", "A", "B", true).Return(1, "two", true), &mockedService.Mock)
// ensure the call was created
if assert.Equal(t, 1, len(mockedService.ExpectedCalls)) {
call := mockedService.ExpectedCalls[0]
assert.Equal(t, "TheExampleMethod", call.Method)
assert.Equal(t, "A", call.Arguments[0])
assert.Equal(t, "B", call.Arguments[1])
assert.Equal(t, true, call.Arguments[2])
assert.Equal(t, 1, call.ReturnArguments[0])
assert.Equal(t, "two", call.ReturnArguments[1])
assert.Equal(t, true, call.ReturnArguments[2])
assert.Equal(t, 0, call.Repeatability)
}
}
func Test_Mock_Return_Once(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("TheExampleMethod", "A", "B", true).Return(1, "two", true).Once()
// ensure the call was created
if assert.Equal(t, 1, len(mockedService.ExpectedCalls)) {
call := mockedService.ExpectedCalls[0]
assert.Equal(t, "TheExampleMethod", call.Method)
assert.Equal(t, "A", call.Arguments[0])
assert.Equal(t, "B", call.Arguments[1])
assert.Equal(t, true, call.Arguments[2])
assert.Equal(t, 1, call.ReturnArguments[0])
assert.Equal(t, "two", call.ReturnArguments[1])
assert.Equal(t, true, call.ReturnArguments[2])
assert.Equal(t, 1, call.Repeatability)
}
}
func Test_Mock_Return_Twice(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("TheExampleMethod", "A", "B", true).Return(1, "two", true).Twice()
// ensure the call was created
if assert.Equal(t, 1, len(mockedService.ExpectedCalls)) {
call := mockedService.ExpectedCalls[0]
assert.Equal(t, "TheExampleMethod", call.Method)
assert.Equal(t, "A", call.Arguments[0])
assert.Equal(t, "B", call.Arguments[1])
assert.Equal(t, true, call.Arguments[2])
assert.Equal(t, 1, call.ReturnArguments[0])
assert.Equal(t, "two", call.ReturnArguments[1])
assert.Equal(t, true, call.ReturnArguments[2])
assert.Equal(t, 2, call.Repeatability)
}
}
func Test_Mock_Return_Times(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("TheExampleMethod", "A", "B", true).Return(1, "two", true).Times(5)
// ensure the call was created
if assert.Equal(t, 1, len(mockedService.ExpectedCalls)) {
call := mockedService.ExpectedCalls[0]
assert.Equal(t, "TheExampleMethod", call.Method)
assert.Equal(t, "A", call.Arguments[0])
assert.Equal(t, "B", call.Arguments[1])
assert.Equal(t, true, call.Arguments[2])
assert.Equal(t, 1, call.ReturnArguments[0])
assert.Equal(t, "two", call.ReturnArguments[1])
assert.Equal(t, true, call.ReturnArguments[2])
assert.Equal(t, 5, call.Repeatability)
}
}
func Test_Mock_Return_Nothing(t *testing.T) {
// make a test impl object
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
assert.Equal(t, mockedService.On("TheExampleMethod", "A", "B", true).Return(), &mockedService.Mock)
// ensure the call was created
if assert.Equal(t, 1, len(mockedService.ExpectedCalls)) {
call := mockedService.ExpectedCalls[0]
assert.Equal(t, "TheExampleMethod", call.Method)
assert.Equal(t, "A", call.Arguments[0])
assert.Equal(t, "B", call.Arguments[1])
assert.Equal(t, true, call.Arguments[2])
assert.Equal(t, 0, len(call.ReturnArguments))
}
}
func Test_Mock_findExpectedCall(t *testing.T) {
m := new(Mock)
m.On("One", 1).Return("one")
m.On("Two", 2).Return("two")
m.On("Two", 3).Return("three")
f, c := m.findExpectedCall("Two", 3)
if assert.Equal(t, 2, f) {
if assert.NotNil(t, c) {
assert.Equal(t, "Two", c.Method)
assert.Equal(t, 3, c.Arguments[0])
assert.Equal(t, "three", c.ReturnArguments[0])
}
}
}
func Test_Mock_findExpectedCall_For_Unknown_Method(t *testing.T) {
m := new(Mock)
m.On("One", 1).Return("one")
m.On("Two", 2).Return("two")
m.On("Two", 3).Return("three")
f, _ := m.findExpectedCall("Two")
assert.Equal(t, -1, f)
}
func Test_Mock_findExpectedCall_Respects_Repeatability(t *testing.T) {
m := new(Mock)
m.On("One", 1).Return("one")
m.On("Two", 2).Return("two").Once()
m.On("Two", 3).Return("three").Twice()
m.On("Two", 3).Return("three").Times(8)
f, c := m.findExpectedCall("Two", 3)
if assert.Equal(t, 2, f) {
if assert.NotNil(t, c) {
assert.Equal(t, "Two", c.Method)
assert.Equal(t, 3, c.Arguments[0])
assert.Equal(t, "three", c.ReturnArguments[0])
}
}
}
func Test_callString(t *testing.T) {
assert.Equal(t, `Method(int,bool,string)`, callString("Method", []interface{}{1, true, "something"}, false))
}
func Test_Mock_Called(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_Called", 1, 2, 3).Return(5, "6", true)
returnArguments := mockedService.Called(1, 2, 3)
if assert.Equal(t, 1, len(mockedService.Calls)) {
assert.Equal(t, "Test_Mock_Called", mockedService.Calls[0].Method)
assert.Equal(t, 1, mockedService.Calls[0].Arguments[0])
assert.Equal(t, 2, mockedService.Calls[0].Arguments[1])
assert.Equal(t, 3, mockedService.Calls[0].Arguments[2])
}
if assert.Equal(t, 3, len(returnArguments)) {
assert.Equal(t, 5, returnArguments[0])
assert.Equal(t, "6", returnArguments[1])
assert.Equal(t, true, returnArguments[2])
}
}
func Test_Mock_Called_For_Bounded_Repeatability(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_Called_For_Bounded_Repeatability", 1, 2, 3).Return(5, "6", true).Once()
mockedService.On("Test_Mock_Called_For_Bounded_Repeatability", 1, 2, 3).Return(-1, "hi", false)
returnArguments1 := mockedService.Called(1, 2, 3)
returnArguments2 := mockedService.Called(1, 2, 3)
if assert.Equal(t, 2, len(mockedService.Calls)) {
assert.Equal(t, "Test_Mock_Called_For_Bounded_Repeatability", mockedService.Calls[0].Method)
assert.Equal(t, 1, mockedService.Calls[0].Arguments[0])
assert.Equal(t, 2, mockedService.Calls[0].Arguments[1])
assert.Equal(t, 3, mockedService.Calls[0].Arguments[2])
assert.Equal(t, "Test_Mock_Called_For_Bounded_Repeatability", mockedService.Calls[1].Method)
assert.Equal(t, 1, mockedService.Calls[1].Arguments[0])
assert.Equal(t, 2, mockedService.Calls[1].Arguments[1])
assert.Equal(t, 3, mockedService.Calls[1].Arguments[2])
}
if assert.Equal(t, 3, len(returnArguments1)) {
assert.Equal(t, 5, returnArguments1[0])
assert.Equal(t, "6", returnArguments1[1])
assert.Equal(t, true, returnArguments1[2])
}
if assert.Equal(t, 3, len(returnArguments2)) {
assert.Equal(t, -1, returnArguments2[0])
assert.Equal(t, "hi", returnArguments2[1])
assert.Equal(t, false, returnArguments2[2])
}
}
func Test_Mock_Called_For_SetTime_Expectation(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("TheExampleMethod", 1, 2, 3).Return(5, "6", true).Times(4)
mockedService.TheExampleMethod(1, 2, 3)
mockedService.TheExampleMethod(1, 2, 3)
mockedService.TheExampleMethod(1, 2, 3)
mockedService.TheExampleMethod(1, 2, 3)
assert.Panics(t, func() {
mockedService.TheExampleMethod(1, 2, 3)
})
}
func Test_Mock_Called_Unexpected(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
// make sure it panics if no expectation was made
assert.Panics(t, func() {
mockedService.Called(1, 2, 3)
}, "Calling unexpected method should panic")
}
func Test_AssertExpectationsForObjects_Helper(t *testing.T) {
var mockedService1 *TestExampleImplementation = new(TestExampleImplementation)
var mockedService2 *TestExampleImplementation = new(TestExampleImplementation)
var mockedService3 *TestExampleImplementation = new(TestExampleImplementation)
mockedService1.On("Test_AssertExpectationsForObjects_Helper", 1).Return()
mockedService2.On("Test_AssertExpectationsForObjects_Helper", 2).Return()
mockedService3.On("Test_AssertExpectationsForObjects_Helper", 3).Return()
mockedService1.Called(1)
mockedService2.Called(2)
mockedService3.Called(3)
assert.True(t, AssertExpectationsForObjects(t, mockedService1.Mock, mockedService2.Mock, mockedService3.Mock))
}
func Test_AssertExpectationsForObjects_Helper_Failed(t *testing.T) {
var mockedService1 *TestExampleImplementation = new(TestExampleImplementation)
var mockedService2 *TestExampleImplementation = new(TestExampleImplementation)
var mockedService3 *TestExampleImplementation = new(TestExampleImplementation)
mockedService1.On("Test_AssertExpectationsForObjects_Helper_Failed", 1).Return()
mockedService2.On("Test_AssertExpectationsForObjects_Helper_Failed", 2).Return()
mockedService3.On("Test_AssertExpectationsForObjects_Helper_Failed", 3).Return()
mockedService1.Called(1)
mockedService3.Called(3)
tt := new(testing.T)
assert.False(t, AssertExpectationsForObjects(tt, mockedService1.Mock, mockedService2.Mock, mockedService3.Mock))
}
func Test_Mock_AssertExpectations(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertExpectations", 1, 2, 3).Return(5, 6, 7)
tt := new(testing.T)
assert.False(t, mockedService.AssertExpectations(tt))
// make the call now
mockedService.Called(1, 2, 3)
// now assert expectations
assert.True(t, mockedService.AssertExpectations(tt))
}
func Test_Mock_AssertExpectationsCustomType(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("TheExampleMethod3", AnythingOfType("*mock.ExampleType")).Return(nil).Once()
tt := new(testing.T)
assert.False(t, mockedService.AssertExpectations(tt))
// make the call now
mockedService.TheExampleMethod3(&ExampleType{})
// now assert expectations
assert.True(t, mockedService.AssertExpectations(tt))
}
func Test_Mock_AssertExpectations_With_Repeatability(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertExpectations_With_Repeatability", 1, 2, 3).Return(5, 6, 7).Twice()
tt := new(testing.T)
assert.False(t, mockedService.AssertExpectations(tt))
// make the call now
mockedService.Called(1, 2, 3)
assert.False(t, mockedService.AssertExpectations(tt))
mockedService.Called(1, 2, 3)
// now assert expectations
assert.True(t, mockedService.AssertExpectations(tt))
}
func Test_Mock_TwoCallsWithDifferentArguments(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_TwoCallsWithDifferentArguments", 1, 2, 3).Return(5, 6, 7)
mockedService.On("Test_Mock_TwoCallsWithDifferentArguments", 4, 5, 6).Return(5, 6, 7)
args1 := mockedService.Called(1, 2, 3)
assert.Equal(t, 5, args1.Int(0))
assert.Equal(t, 6, args1.Int(1))
assert.Equal(t, 7, args1.Int(2))
args2 := mockedService.Called(4, 5, 6)
assert.Equal(t, 5, args2.Int(0))
assert.Equal(t, 6, args2.Int(1))
assert.Equal(t, 7, args2.Int(2))
}
func Test_Mock_AssertNumberOfCalls(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertNumberOfCalls", 1, 2, 3).Return(5, 6, 7)
mockedService.Called(1, 2, 3)
assert.True(t, mockedService.AssertNumberOfCalls(t, "Test_Mock_AssertNumberOfCalls", 1))
mockedService.Called(1, 2, 3)
assert.True(t, mockedService.AssertNumberOfCalls(t, "Test_Mock_AssertNumberOfCalls", 2))
}
func Test_Mock_AssertCalled(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertCalled", 1, 2, 3).Return(5, 6, 7)
mockedService.Called(1, 2, 3)
assert.True(t, mockedService.AssertCalled(t, "Test_Mock_AssertCalled", 1, 2, 3))
}
func Test_Mock_AssertCalled_WithAnythingOfTypeArgument(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertCalled_WithAnythingOfTypeArgument", Anything, Anything, Anything).Return()
mockedService.Called(1, "two", []uint8("three"))
assert.True(t, mockedService.AssertCalled(t, "Test_Mock_AssertCalled_WithAnythingOfTypeArgument", AnythingOfType("int"), AnythingOfType("string"), AnythingOfType("[]uint8")))
}
func Test_Mock_AssertCalled_WithArguments(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertCalled_WithArguments", 1, 2, 3).Return(5, 6, 7)
mockedService.Called(1, 2, 3)
tt := new(testing.T)
assert.True(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments", 1, 2, 3))
assert.False(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments", 2, 3, 4))
}
func Test_Mock_AssertCalled_WithArguments_With_Repeatability(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertCalled_WithArguments_With_Repeatability", 1, 2, 3).Return(5, 6, 7).Once()
mockedService.On("Test_Mock_AssertCalled_WithArguments_With_Repeatability", 2, 3, 4).Return(5, 6, 7).Once()
mockedService.Called(1, 2, 3)
mockedService.Called(2, 3, 4)
tt := new(testing.T)
assert.True(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments_With_Repeatability", 1, 2, 3))
assert.True(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments_With_Repeatability", 2, 3, 4))
assert.False(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments_With_Repeatability", 3, 4, 5))
}
func Test_Mock_AssertNotCalled(t *testing.T) {
var mockedService *TestExampleImplementation = new(TestExampleImplementation)
mockedService.On("Test_Mock_AssertNotCalled", 1, 2, 3).Return(5, 6, 7)
mockedService.Called(1, 2, 3)
assert.True(t, mockedService.AssertNotCalled(t, "Test_Mock_NotCalled"))
}
/*
Arguments helper methods
*/
func Test_Arguments_Get(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.Equal(t, "string", args.Get(0).(string))
assert.Equal(t, 123, args.Get(1).(int))
assert.Equal(t, true, args.Get(2).(bool))
}
func Test_Arguments_Is(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.True(t, args.Is("string", 123, true))
assert.False(t, args.Is("wrong", 456, false))
}
func Test_Arguments_Diff(t *testing.T) {
var args Arguments = []interface{}{"Hello World", 123, true}
var diff string
var count int
diff, count = args.Diff([]interface{}{"Hello World", 456, "false"})
assert.Equal(t, 2, count)
assert.Contains(t, diff, `%!s(int=456) != %!s(int=123)`)
assert.Contains(t, diff, `false != %!s(bool=true)`)
}
func Test_Arguments_Diff_DifferentNumberOfArgs(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
var diff string
var count int
diff, count = args.Diff([]interface{}{"string", 456, "false", "extra"})
assert.Equal(t, 3, count)
assert.Contains(t, diff, `extra != (Missing)`)
}
func Test_Arguments_Diff_WithAnythingArgument(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
var count int
_, count = args.Diff([]interface{}{"string", Anything, true})
assert.Equal(t, 0, count)
}
func Test_Arguments_Diff_WithAnythingArgument_InActualToo(t *testing.T) {
var args Arguments = []interface{}{"string", Anything, true}
var count int
_, count = args.Diff([]interface{}{"string", 123, true})
assert.Equal(t, 0, count)
}
func Test_Arguments_Diff_WithAnythingOfTypeArgument(t *testing.T) {
var args Arguments = []interface{}{"string", AnythingOfType("int"), true}
var count int
_, count = args.Diff([]interface{}{"string", 123, true})
assert.Equal(t, 0, count)
}
func Test_Arguments_Diff_WithAnythingOfTypeArgument_Failing(t *testing.T) {
var args Arguments = []interface{}{"string", AnythingOfType("string"), true}
var count int
var diff string
diff, count = args.Diff([]interface{}{"string", 123, true})
assert.Equal(t, 1, count)
assert.Contains(t, diff, `string != type int - %!s(int=123)`)
}
func Test_Arguments_Assert(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.True(t, args.Assert(t, "string", 123, true))
}
func Test_Arguments_String_Representation(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.Equal(t, `string,int,bool`, args.String())
}
func Test_Arguments_String(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.Equal(t, "string", args.String(0))
}
func Test_Arguments_Error(t *testing.T) {
var err error = errors.New("An Error")
var args Arguments = []interface{}{"string", 123, true, err}
assert.Equal(t, err, args.Error(3))
}
func Test_Arguments_Error_Nil(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true, nil}
assert.Equal(t, nil, args.Error(3))
}
func Test_Arguments_Int(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.Equal(t, 123, args.Int(1))
}
func Test_Arguments_Bool(t *testing.T) {
var args Arguments = []interface{}{"string", 123, true}
assert.Equal(t, true, args.Bool(2))
}


@@ -1,64 +0,0 @@
## Ubuntu (Kylin) 14.04
### Build Dependencies
This installation document assumes Ubuntu 14.04+ on x86-64 platform.
##### Install Git, GCC, yasm
```sh
$ sudo apt-get install git build-essential yasm
```
##### Install Go 1.4+
Download Go 1.4+ from [https://golang.org/dl/](https://golang.org/dl/).
```sh
$ wget https://storage.googleapis.com/golang/go1.4.linux-amd64.tar.gz
$ mkdir -p ${HOME}/bin/
$ mkdir -p ${HOME}/go/
$ tar -C ${HOME}/bin/ -xzf go1.4.linux-amd64.tar.gz
```
##### Setup GOROOT and GOPATH
Add the following exports to your ``~/.bashrc``. Environment variable GOROOT specifies the location of your golang binaries
and GOPATH specifies the location of your project workspace.
```sh
$ export GOROOT=${HOME}/bin/go
$ export GOPATH=${HOME}/go
$ export PATH=$PATH:${HOME}/bin/go/bin:${GOPATH}/bin
```
## OS X (Yosemite) 10.10
### Build Dependencies
This installation document assumes OS X Yosemite 10.10+ on x86-64 platform.
##### Install brew
```sh
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
##### Install Git, Python
```sh
$ brew install git python yasm
```
##### Install Go 1.4+
Install golang binaries using `brew`
```sh
$ brew install go
$ mkdir -p $HOME/go
```
##### Setup GOROOT and GOPATH
Add the following exports to your ``~/.bashrc``. Environment variable GOROOT specifies the location of your golang binaries
and GOPATH specifies the location of your project workspace.
```sh
$ export GOPATH=${HOME}/go
$ export GOVERSION=$(brew list go | head -n 1 | cut -d '/' -f 6)
$ export GOROOT=$(brew --prefix)/Cellar/go/${GOVERSION}/libexec
$ export PATH=$PATH:${GOPATH}/bin
```

37
MAINTAINERS.md Normal file

@@ -0,0 +1,37 @@
# For maintainers only
### Setup your minio Github Repository
Fork [minio upstream](https://github.com/minio/minio/fork) source repository to your own personal repository.
```bash
$ mkdir -p $GOPATH/src/github.com/minio
$ cd $GOPATH/src/github.com/minio
$ git clone https://github.com/$USER_ID/minio
$
```
``minio`` uses [govendor](https://github.com/kardianos/govendor) for its dependency management.
### To manage dependencies
#### Add new dependencies
- Run `go get foo/bar`
- Edit your code to import foo/bar
- Run `govendor add foo/bar` from top-level folder
#### Remove dependencies
- Run `govendor remove foo/bar`
#### Update dependencies
- Run `govendor remove +vendor`
- Run `go get -u foo/bar` to update the dependent package
- Run `govendor add +external`
### Making new releases
`minio` doesn't follow the semantic versioning style; instead it uses the release date and time as the release version.
`make release` will generate new binary into `release` directory.

155
Makefile

@@ -1,63 +1,146 @@
all: getdeps install
LDFLAGS := $(shell go run buildscripts/gen-ldflags.go)
PWD := $(shell pwd)
GOPATH := $(shell go env GOPATH)
BUILD_LDFLAGS := '$(LDFLAGS)'
TAG := latest
checkdeps:
HOST ?= $(shell uname)
CPU ?= $(shell uname -m)
# if no host is identified (no uname tool)
# we assume a Linux-64bit build
ifeq ($(HOST),)
HOST = Linux
endif
# identify CPU
ifeq ($(CPU), x86_64)
HOST := $(HOST)64
else
ifeq ($(CPU), amd64)
HOST := $(HOST)64
else
ifeq ($(CPU), i686)
HOST := $(HOST)32
endif
endif
endif
#############################################
# now we find out the target OS for
# which we are going to compile in case
# the caller didn't yet define OS himself
ifndef (OS)
ifeq ($(HOST), Linux64)
arch = gcc
else
ifeq ($(HOST), Linux32)
arch = 32
else
ifeq ($(HOST), Darwin64)
arch = clang
else
ifeq ($(HOST), Darwin32)
arch = clang
else
ifeq ($(HOST), FreeBSD64)
arch = gcc
endif
endif
endif
endif
endif
endif
all: install
checks:
@echo "Checking deps:"
@(env bash $(PWD)/buildscripts/checkdeps.sh)
@(env bash $(PWD)/buildscripts/checkgopath.sh)
checkgopath:
@echo "Checking if project is at ${GOPATH}"
@for mcpath in $(echo ${GOPATH} | sed 's/:/\n/g' | grep -v Godeps); do if [ ! -d ${mcpath}/src/github.com/minio/minio ]; then echo "Project not found in ${mcpath}, please follow instructions provided at https://github.com/minio/minio/blob/master/CONTRIBUTING.md#setup-your-minio-github-repository" && exit 1; fi done
getdeps: checks
@go get -u github.com/golang/lint/golint && echo "Installed golint:"
@go get -u github.com/fzipp/gocyclo && echo "Installed gocyclo:"
@go get -u github.com/remyoudompheng/go-misc/deadcode && echo "Installed deadcode:"
@go get -u github.com/client9/misspell/cmd/misspell && echo "Installed misspell:"
@go get -u github.com/gordonklaus/ineffassign && echo "Installed ineffassign:"
getdeps: checkdeps checkgopath
@go get github.com/minio/godep && echo "Installed godep:"
@go get github.com/golang/lint/golint && echo "Installed golint:"
@go get golang.org/x/tools/cmd/vet && echo "Installed vet:"
@go get github.com/fzipp/gocyclo && echo "Installed gocyclo:"
verifiers: getdeps vet fmt lint cyclo
verifiers: vet fmt lint cyclo spelling
vet:
@echo "Running $@:"
@go vet ./...
@GO15VENDOREXPERIMENT=1 go tool vet -all *.go
@GO15VENDOREXPERIMENT=1 go tool vet -all ./pkg
@GO15VENDOREXPERIMENT=1 go tool vet -shadow=true *.go
@GO15VENDOREXPERIMENT=1 go tool vet -shadow=true ./pkg
fmt:
@echo "Running $@:"
@test -z "$$(gofmt -s -l . | grep -v Godeps/_workspace/src/ | tee /dev/stderr)" || \
echo "+ please format Go code with 'gofmt -s'"
@GO15VENDOREXPERIMENT=1 gofmt -s -l *.go
@GO15VENDOREXPERIMENT=1 gofmt -s -l pkg
lint:
@echo "Running $@:"
@test -z "$$(golint ./... | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/golint *.go
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/golint github.com/minio/minio/pkg...
ineffassign:
@echo "Running $@:"
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/ineffassign .
cyclo:
@echo "Running $@:"
@test -z "$$(gocyclo -over 19 . | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/gocyclo -over 65 *.go
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/gocyclo -over 65 pkg
gomake-all: getdeps verifiers
build: getdeps verifiers $(UI_ASSETS)
deadcode:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/deadcode
spelling:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error *.go
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/misspell -error pkg/**/*
test: build
@echo "Running all minio testing:"
@GO15VENDOREXPERIMENT=1 go test $(GOFLAGS) .
@GO15VENDOREXPERIMENT=1 go test $(GOFLAGS) github.com/minio/minio/pkg...
coverage: build
@echo "Running all coverage for minio:"
@GO15VENDOREXPERIMENT=1 ./buildscripts/go-coverage.sh
gomake-all: build
@echo "Installing minio:"
@go run make.go install
@GO15VENDOREXPERIMENT=1 go build --ldflags $(BUILD_LDFLAGS) -o $(GOPATH)/bin/minio
release: getdeps verifiers
@echo "Installing minio:"
@go run make.go release
@go run make.go install
pkg-add:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/govendor add $(PKG)
godepupdate:
@for i in $(grep ImportPath Godeps/Godeps.json | grep -v minio/minio | cut -f2 -d: | sed -e 's/,//' -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g'); do godep update $i; done
pkg-update:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/govendor update $(PKG)
pkg-remove:
@GO15VENDOREXPERIMENT=1 ${GOPATH}/bin/govendor remove $(PKG)
pkg-list:
@GO15VENDOREXPERIMENT=1 $(GOPATH)/bin/govendor list
install: gomake-all
save:
@godep save ./...
release: verifiers
@MINIO_RELEASE=RELEASE ./buildscripts/build.sh
restore:
@godep restore
env:
@godep go env
experimental: verifiers
@MINIO_RELEASE=EXPERIMENTAL ./buildscripts/build.sh
clean:
@echo "Cleaning up all the generated files:"
@rm -fv cover.out
@rm -fv pkg/utils/split/TESTPREFIX.*
@rm -fv minio
@find Godeps -name "*.a" -type f -exec rm -vf {} \+
@rm -fv minio minio.test cover.out
@find . -name '*.test' | xargs rm -fv
@rm -rf isa-l
@rm -rf build
@rm -rf release

4
NOTICE

@@ -1,9 +1,9 @@
Minimalist Object Storage, (C) 2014,2015 Minio, Inc.
Minio Cloud Storage, (C) 2014,2015 Minio, Inc.
This product includes software developed at Minio, Inc.
(https://minio.io/).
The Minio project contains unmodified/modified subcomponents too with
separate copyright notices and license terms. Your use of the source
code for the these subcomponents is subject to the terms and conditions
code for these subcomponents is subject to the terms and conditions
of the following licenses.

228
README.md

@@ -1,85 +1,191 @@
## Minio Server (minio) [![Build Status](https://travis-ci.org/minio/minio.svg)](https://travis-ci.org/minio/minio)
# Minio Quickstart Guide [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minio is a minimal object storage server written in Golang and licensed under [Apache license v2](./LICENSE). Minio is compatible with Amazon S3 APIs.
Minio is an object storage server released under Apache License v2.0. It is compatible with Amazon S3 cloud storage service. It is best suited for storing unstructured data such as photos, videos, log files, backups and container / VM images. Size of an object can range from a few KBs to a maximum of 5TB.
## Roadmap
## 1. Download
~~~
Storage Backend:
- Donut: Erasure coded backend.
- Status: Standalone mode complete.
- Memory: In-memory backend.
- Status: Complete.
- Filesystem: Local disk filesystem backend.
- Status: Work in progress.
Minio server is light enough to be bundled with the application stack, similar to NodeJS, Redis and MySQL.
Storage Operations:
- Collective:
- Status: Not started.
| Platform| Architecture | URL|
| ----------| -------- | ------|
|GNU/Linux|64-bit Intel|https://dl.minio.io/server/minio/release/linux-amd64/minio|
||32-bit Intel|https://dl.minio.io/server/minio/release/linux-386/minio|
||32-bit ARM|https://dl.minio.io/server/minio/release/linux-arm/minio|
|Apple OS X|64-bit Intel|https://dl.minio.io/server/minio/release/darwin-amd64/minio|
|Microsoft Windows|64-bit|https://dl.minio.io/server/minio/release/windows-amd64/minio.exe|
||32-bit|https://dl.minio.io/server/minio/release/windows-386/minio.exe|
|FreeBSD|64-bit|https://dl.minio.io/server/minio/release/freebsd-amd64/minio|
Storage Management:
- WebCLI:
- Status: Work in progress.
- Authentication:
- Status: Work in progress.
- Admin Console:
- Status: Work in progress.
- User Console:
- Status: Work in progress.
- Logging:
- Status: Work in progress.
~~~
### Install from Source
### Install
Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://docs.minio.io/docs/how-to-install-golang).
#### GNU/Linux
Download ``minio`` from https://dl.minio.io:9000/updates/2015/Jun/linux-amd64/minio
```sh
$ go get -d github.com/minio/minio
$ cd $GOPATH/src/github.com/minio/minio
$ make
```
## 2. Run Minio Server
### GNU/Linux
```sh
~~~
$ wget https://dl.minio.io:9000/updates/2015/Jun/linux-amd64/minio
$ chmod +x minio
$ ./minio mode memory limit 12GB expire 2h
~~~
#### OS X
$ ./minio --help
$ ./minio server ~/Photos
Download ``minio`` from https://dl.minio.io:9000/updates/2015/Jun/darwin-amd64/minio
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
~~~
$ wget https://dl.minio.io:9000/updates/2015/Jun/darwin-amd64/minio
$ chmod +x minio
$ ./minio mode memory limit 12GB expire 2h
~~~
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
### How to use Minio?
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
[![asciicast](https://asciinema.org/a/21508.png)](https://asciinema.org/a/21508)
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
### Minio Client (mc)
```
``mc`` provides minimal tools to work with Amazon S3 compatible object storage and filesystems. Go to [Minio Client](https://github.com/minio/mc#minio-client-mc-).
### OS X
### Minimal S3 Compatible Client Libraries
- [Golang Library](https://github.com/minio/minio-go)
- [Java Library](https://github.com/minio/minio-java)
- [Nodejs Library](https://github.com/minio/minio-js)
### Join The Community
* Community hangout on Gitter [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
* Ask questions on Quora [![Quora](http://upload.wikimedia.org/wikipedia/commons/thumb/5/57/Quora_logo.svg/55px-Quora_logo.svg.png)](http://www.quora.com/Minio)
```sh
### Contribute
* [Contributors Guide](./CONTRIBUTING.md)
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
### Download
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
-- No releases yet --
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
### Supported platforms
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
| Name | Supported |
| ------------- | ------------- |
| Linux | Yes |
| Mac OSX | Yes |
| Windows | Work in progress |
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
### Microsoft Windows
```sh
C:\Users\Username\Downloads> minio.exe --help
C:\Users\Username\Downloads> minio.exe server D:\Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc.exe config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
### Docker Container
```sh
$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio
```
### FreeBSD
```sh
$ chmod 755 minio
$ ./minio --help
$ ./minio server ~/Photos
Endpoint: http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
Browser Access:
http://10.0.0.10:9000 http://127.0.0.1:9000 http://172.17.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.0.0.10:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
```
## 3. Test Minio Server using Minio Browser
Open a web browser and navigate to http://127.0.0.1:9000 to view your buckets on minio server.
![Screenshot](https://github.com/minio/minio/blob/master/docs/screenshots/minio-browser.jpg?raw=true)
## 4. Test Minio Server using `mc`
Install mc from [here](https://docs.minio.io/docs/minio-client-quickstart-guide). Use `mc ls` command to list all the buckets on your minio server.
```sh
$ mc ls myminio/
[2015-08-05 08:13:22 IST] 0B andoria/
[2015-08-05 06:14:26 IST] 0B deflector/
[2015-08-05 08:13:11 IST] 0B ferenginar/
[2016-03-08 14:56:35 IST] 0B jarjarbing/
[2016-01-20 16:07:41 IST] 0B my.minio.io/
```
For more examples please navigate to [Minio Client Complete Guide](https://docs.minio.io/docs/minio-client-complete-guide).
## 5. Explore Further
- [Minio Erasure Code QuickStart Guide](https://docs.minio.io/docs/minio-erasure-code-quickstart-guide)
- [Minio Docker Quickstart Guide](https://docs.minio.io/docs/minio-docker-quickstart-guide)
- [Use `mc` with Minio Server](https://docs.minio.io/docs/minio-client-quickstart-guide)
- [Use `aws-cli` with Minio Server](https://docs.minio.io/docs/aws-cli-with-minio)
- [Use `s3cmd` with Minio Server](https://docs.minio.io/docs/s3cmd-with-minio)
- [Use `minio-go` SDK with Minio Server](https://docs.minio.io/docs/golang-client-quickstart-guide)
## 6. Contribute to Minio Project
Please follow Minio [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)

86
access-key.go Normal file

@@ -0,0 +1,86 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"crypto/rand"
"encoding/base64"
"regexp"
)
// credential container for access and secret keys.
type credential struct {
AccessKeyID string `json:"accessKey"`
SecretAccessKey string `json:"secretKey"`
}
const (
minioAccessID = 20
minioSecretID = 40
)
// isValidSecretKey - validate secret key.
var isValidSecretKey = regexp.MustCompile(`^.{8,40}$`)
// isValidAccessKey - validate access key.
var isValidAccessKey = regexp.MustCompile(`^[a-zA-Z0-9\\-\\.\\_\\~]{5,20}$`)
// mustGenAccessKeys - must generate access credentials.
func mustGenAccessKeys() (creds credential) {
creds, err := genAccessKeys()
fatalIf(err, "Unable to generate access keys.")
return creds
}
// genAccessKeys - generate access credentials.
func genAccessKeys() (credential, error) {
accessKeyID, err := genAccessKeyID()
if err != nil {
return credential{}, err
}
secretAccessKey, err := genSecretAccessKey()
if err != nil {
return credential{}, err
}
creds := credential{
AccessKeyID: string(accessKeyID),
SecretAccessKey: string(secretAccessKey),
}
return creds, nil
}
// genAccessKeyID - generate a random alphanumeric value using only uppercase characters.
// The generated value is minioAccessID bytes long.
func genAccessKeyID() ([]byte, error) {
alpha := make([]byte, minioAccessID)
if _, err := rand.Read(alpha); err != nil {
return nil, err
}
for i := 0; i < minioAccessID; i++ {
alpha[i] = alphaNumericTable[alpha[i]%byte(len(alphaNumericTable))]
}
return alpha, nil
}
// genSecretAccessKey - generate a random base64-encoded value from a random seed.
func genSecretAccessKey() ([]byte, error) {
rb := make([]byte, minioSecretID)
if _, err := rand.Read(rb); err != nil {
return nil, err
}
return []byte(base64.StdEncoding.EncodeToString(rb))[:minioSecretID], nil
}
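The access and secret keys above are both derived from `crypto/rand`: raw bytes folded onto an uppercase alphanumeric table for the access key, and a truncated base64 encoding for the secret key. A minimal self-contained sketch of that technique (the lengths and table below are local stand-ins for `minioAccessID`, `minioSecretID` and `alphaNumericTable`):
```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// Stand-ins for minioAccessID, minioSecretID and alphaNumericTable above.
const (
	accessKeyLen = 20
	secretKeyLen = 40
)

var table = []byte("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")

func main() {
	// Access key: random bytes folded onto the uppercase alphanumeric table.
	ak := make([]byte, accessKeyLen)
	if _, err := rand.Read(ak); err != nil {
		panic(err)
	}
	for i := range ak {
		ak[i] = table[ak[i]%byte(len(table))]
	}

	// Secret key: base64 of random bytes, truncated to the required length.
	sk := make([]byte, secretKeyLen)
	if _, err := rand.Read(sk); err != nil {
		panic(err)
	}
	secret := base64.StdEncoding.EncodeToString(sk)[:secretKeyLen]

	fmt.Println("AccessKey:", string(ak))
	fmt.Println("SecretKey:", secret)
}
```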

41
api-datatypes.go Normal file

@@ -0,0 +1,41 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"encoding/xml"
)
// ObjectIdentifier carries key name for the object to delete.
type ObjectIdentifier struct {
ObjectName string `xml:"Key"`
}
// createBucketLocationConfiguration - container for the bucket configuration request from a client.
// Used for parsing the location from the request body for MakeBucket.
type createBucketLocationConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CreateBucketConfiguration" json:"-"`
Location string `xml:"LocationConstraint"`
}
// DeleteObjectsRequest - xml carrying the object key names which needs to be deleted.
type DeleteObjectsRequest struct {
// Element to enable quiet mode for the request
Quiet bool
// List of objects to be deleted
Objects []ObjectIdentifier `xml:"Object"`
}
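For illustration, a small sketch of how a Multi-Object Delete request body maps onto `DeleteObjectsRequest` via `encoding/xml`; the struct definitions are local copies of the ones above and the request body is a made-up example:
```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Local copies of the types above, kept only for this sketch.
type ObjectIdentifier struct {
	ObjectName string `xml:"Key"`
}

type DeleteObjectsRequest struct {
	Quiet   bool
	Objects []ObjectIdentifier `xml:"Object"`
}

func main() {
	// Example Multi-Object Delete body as a client would send it.
	body := `<Delete>
	  <Quiet>true</Quiet>
	  <Object><Key>photos/2016/a.jpg</Key></Object>
	  <Object><Key>photos/2016/b.jpg</Key></Object>
	</Delete>`

	var req DeleteObjectsRequest
	if err := xml.Unmarshal([]byte(body), &req); err != nil {
		panic(err)
	}
	fmt.Printf("quiet=%v objects=%d first=%s\n", req.Quiet, len(req.Objects), req.Objects[0].ObjectName)
}
```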

628
api-errors.go Normal file

@@ -0,0 +1,628 @@
/*
* Minio Cloud Storage, (C) 2015 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"encoding/xml"
"net/http"
)
// APIError structure
type APIError struct {
Code string
Description string
HTTPStatusCode int
}
// APIErrorResponse - error response format
type APIErrorResponse struct {
XMLName xml.Name `xml:"Error" json:"-"`
Code string
Message string
Key string
BucketName string
Resource string
RequestID string `xml:"RequestId"`
HostID string `xml:"HostId"`
}
// APIErrorCode type of error status.
type APIErrorCode int
// Error codes, non exhaustive list - http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
const (
ErrNone APIErrorCode = iota
ErrAccessDenied
ErrBadDigest
ErrBucketAlreadyExists
ErrEntityTooSmall
ErrEntityTooLarge
ErrIncompleteBody
ErrInternalError
ErrInvalidAccessKeyID
ErrInvalidBucketName
ErrInvalidDigest
ErrInvalidRange
ErrInvalidMaxKeys
ErrInvalidMaxUploads
ErrInvalidMaxParts
ErrInvalidPartNumberMarker
ErrInvalidRequestBody
ErrInvalidCopySource
ErrInvalidCopyDest
ErrInvalidPolicyDocument
ErrMalformedXML
ErrMissingContentLength
ErrMissingContentMD5
ErrMissingRequestBodyError
ErrNoSuchBucket
ErrNoSuchBucketPolicy
ErrNoSuchKey
ErrNoSuchUpload
ErrNotImplemented
ErrPreconditionFailed
ErrRequestTimeTooSkewed
ErrSignatureDoesNotMatch
ErrMethodNotAllowed
ErrInvalidPart
ErrInvalidPartOrder
ErrAuthorizationHeaderMalformed
ErrMalformedPOSTRequest
ErrSignatureVersionNotSupported
ErrBucketNotEmpty
ErrAllAccessDisabled
ErrMalformedPolicy
ErrMissingFields
ErrMissingCredTag
ErrCredMalformed
ErrInvalidRegion
ErrInvalidService
ErrInvalidRequestVersion
ErrMissingSignTag
ErrMissingSignHeadersTag
ErrPolicyAlreadyExpired
ErrMalformedDate
ErrMalformedPresignedDate
ErrMalformedCredentialDate
ErrMalformedCredentialRegion
ErrMalformedExpires
ErrNegativeExpires
ErrAuthHeaderEmpty
ErrExpiredPresignRequest
ErrUnsignedHeaders
ErrMissingDateHeader
ErrInvalidQuerySignatureAlgo
ErrInvalidQueryParams
ErrBucketAlreadyOwnedByYou
// Add new error codes here.
// Bucket notification related errors.
ErrEventNotification
ErrARNNotification
ErrRegionNotification
ErrOverlappingFilterNotification
ErrFilterNameInvalid
ErrFilterNamePrefix
ErrFilterNameSuffix
ErrFilterPrefixValueInvalid
// S3 extended errors.
ErrContentSHA256Mismatch
// Add new extended error codes here.
// Minio extended errors.
ErrReadQuorum
ErrWriteQuorum
ErrStorageFull
ErrObjectExistsAsDirectory
ErrPolicyNesting
ErrInvalidObjectName
// Add new extended error codes here.
// Please open a https://github.com/minio/minio/issues before adding
// new error codes here.
)
// error code to APIError structure, these fields carry respective
// descriptions for all the error responses.
var errorCodeResponse = map[APIErrorCode]APIError{
ErrInvalidCopyDest: {
Code: "InvalidRequest",
Description: "This copy request is illegal because it is trying to copy an object to itself.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidCopySource: {
Code: "InvalidArgument",
Description: "Copy Source must mention the source bucket and key: sourcebucket/sourcekey.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidRequestBody: {
Code: "InvalidArgument",
Description: "Body shouldn't be set for this request.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxUploads: {
Code: "InvalidArgument",
Description: "Argument max-uploads must be an integer between 0 and 2147483647",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxKeys: {
Code: "InvalidArgument",
Description: "Argument maxKeys must be an integer between 0 and 2147483647",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidMaxParts: {
Code: "InvalidArgument",
Description: "Argument max-parts must be an integer between 0 and 2147483647",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartNumberMarker: {
Code: "InvalidArgument",
Description: "Argument partNumberMarker must be an integer.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPolicyDocument: {
Code: "InvalidPolicyDocument",
Description: "The content of the form does not meet the conditions specified in the policy document.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAccessDenied: {
Code: "AccessDenied",
Description: "Access Denied.",
HTTPStatusCode: http.StatusForbidden,
},
ErrBadDigest: {
Code: "BadDigest",
Description: "The Content-Md5 you specified did not match what we received.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketAlreadyExists: {
Code: "BucketAlreadyExists",
Description: "The requested bucket name is not available.",
HTTPStatusCode: http.StatusConflict,
},
ErrEntityTooSmall: {
Code: "EntityTooSmall",
Description: "Your proposed upload is smaller than the minimum allowed object size.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrEntityTooLarge: {
Code: "EntityTooLarge",
Description: "Your proposed upload exceeds the maximum allowed object size.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrIncompleteBody: {
Code: "IncompleteBody",
Description: "You did not provide the number of bytes specified by the Content-Length HTTP header.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInternalError: {
Code: "InternalError",
Description: "We encountered an internal error, please try again.",
HTTPStatusCode: http.StatusInternalServerError,
},
ErrInvalidAccessKeyID: {
Code: "InvalidAccessKeyID",
Description: "The access key ID you provided does not exist in our records.",
HTTPStatusCode: http.StatusForbidden,
},
ErrInvalidBucketName: {
Code: "InvalidBucketName",
Description: "The specified bucket is not valid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidDigest: {
Code: "InvalidDigest",
Description: "The Content-Md5 you specified is not valid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidRange: {
Code: "InvalidRange",
Description: "The requested range is not satisfiable",
HTTPStatusCode: http.StatusRequestedRangeNotSatisfiable,
},
ErrMalformedXML: {
Code: "MalformedXML",
Description: "The XML you provided was not well-formed or did not validate against our published schema.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingContentLength: {
Code: "MissingContentLength",
Description: "You must provide the Content-Length HTTP header.",
HTTPStatusCode: http.StatusLengthRequired,
},
ErrMissingContentMD5: {
Code: "MissingContentMD5",
Description: "Missing required header for this request: Content-Md5.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingRequestBodyError: {
Code: "MissingRequestBodyError",
Description: "Request body is empty.",
HTTPStatusCode: http.StatusLengthRequired,
},
ErrNoSuchBucket: {
Code: "NoSuchBucket",
Description: "The specified bucket does not exist",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchBucketPolicy: {
Code: "NoSuchBucketPolicy",
Description: "The specified bucket does not have a bucket policy.",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchKey: {
Code: "NoSuchKey",
Description: "The specified key does not exist.",
HTTPStatusCode: http.StatusNotFound,
},
ErrNoSuchUpload: {
Code: "NoSuchUpload",
Description: "The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.",
HTTPStatusCode: http.StatusNotFound,
},
ErrNotImplemented: {
Code: "NotImplemented",
Description: "A header you provided implies functionality that is not implemented",
HTTPStatusCode: http.StatusNotImplemented,
},
ErrPreconditionFailed: {
Code: "PreconditionFailed",
Description: "At least one of the pre-conditions you specified did not hold",
HTTPStatusCode: http.StatusPreconditionFailed,
},
ErrRequestTimeTooSkewed: {
Code: "RequestTimeTooSkewed",
Description: "The difference between the request time and the server's time is too large.",
HTTPStatusCode: http.StatusForbidden,
},
ErrSignatureDoesNotMatch: {
Code: "SignatureDoesNotMatch",
Description: "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
HTTPStatusCode: http.StatusForbidden,
},
ErrMethodNotAllowed: {
Code: "MethodNotAllowed",
Description: "The specified method is not allowed against this resource.",
HTTPStatusCode: http.StatusMethodNotAllowed,
},
ErrInvalidPart: {
Code: "InvalidPart",
Description: "One or more of the specified parts could not be found.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidPartOrder: {
Code: "InvalidPartOrder",
Description: "The list of parts was not in ascending order. The parts list must be specified in order by part number.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAuthorizationHeaderMalformed: {
Code: "AuthorizationHeaderMalformed",
Description: "The authorization header is malformed; the region is wrong; expecting 'us-east-1'.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedPOSTRequest: {
Code: "MalformedPOSTRequest",
Description: "The body of your POST request is not well-formed multipart/form-data.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrSignatureVersionNotSupported: {
Code: "InvalidRequest",
Description: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketNotEmpty: {
Code: "BucketNotEmpty",
Description: "The bucket you tried to delete is not empty.",
HTTPStatusCode: http.StatusConflict,
},
ErrAllAccessDisabled: {
Code: "AllAccessDisabled",
Description: "All access to this bucket has been disabled.",
HTTPStatusCode: http.StatusForbidden,
},
ErrMalformedPolicy: {
Code: "MalformedPolicy",
Description: "Policy has invalid resource.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingFields: {
Code: "MissingFields",
Description: "Missing fields in request.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingCredTag: {
Code: "InvalidRequest",
Description: "Missing Credential field for this request.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrCredMalformed: {
Code: "AuthorizationQueryParametersError",
Description: "Error parsing the X-Amz-Credential parameter; the Credential is mal-formed; expecting \"<YOUR-AKID>/YYYYMMDD/REGION/SERVICE/aws4_request\".",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedDate: {
Code: "MalformedDate",
Description: "Invalid date format header, expected to be in ISO8601, RFC1123 or RFC1123Z time format.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedPresignedDate: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Date must be in the ISO8601 Long Format \"yyyyMMdd'T'HHmmss'Z'\"",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Should contain the invalid param set as seen in https://github.com/minio/minio/issues/2385.
// right Description: "Error parsing the X-Amz-Credential parameter; incorrect date format \"%s\". This date in the credential must be in the format \"yyyyMMdd\".",
// Need changes to make sure variable messages can be constructed.
ErrMalformedCredentialDate: {
Code: "AuthorizationQueryParametersError",
Description: "Error parsing the X-Amz-Credential parameter; incorrect date format \"%s\". This date in the credential must be in the format \"yyyyMMdd\".",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Should contain the invalid param set as seen in https://github.com/minio/minio/issues/2385.
// right Description: "Error parsing the X-Amz-Credential parameter; the region 'us-east-' is wrong; expecting 'us-east-1'".
// Need changes to make sure variable messages can be constructed.
ErrMalformedCredentialRegion: {
Code: "AuthorizationQueryParametersError",
Description: "Error parsing the X-Amz-Credential parameter; the region is wrong;",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidRegion: {
Code: "InvalidRegion",
Description: "Region does not match.",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Should contain the invalid param set as seen in https://github.com/minio/minio/issues/2385.
// right Description: "Error parsing the X-Amz-Credential parameter; incorrect service \"s4\". This endpoint belongs to \"s3\".".
// Need changes to make sure variable messages can be constructed.
ErrInvalidService: {
Code: "AuthorizationQueryParametersError",
Description: "Error parsing the X-Amz-Credential parameter; incorrect service. This endpoint belongs to \"s3\".",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Should contain the invalid param set as seen in https://github.com/minio/minio/issues/2385.
// Description: "Error parsing the X-Amz-Credential parameter; incorrect terminal "aws4_reque". This endpoint uses "aws4_request".
// Need changes to make sure variable messages can be constructed.
ErrInvalidRequestVersion: {
Code: "AuthorizationQueryParametersError",
Description: "Error parsing the X-Amz-Credential parameter; incorrect terminal. This endpoint uses \"aws4_request\".",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSignTag: {
Code: "AccessDenied",
Description: "Signature header missing Signature field.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingSignHeadersTag: {
Code: "InvalidArgument",
Description: "Signature header missing SignedHeaders field.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrPolicyAlreadyExpired: {
Code: "AccessDenied",
Description: "Invalid according to Policy: Policy expired.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMalformedExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires should be a number",
HTTPStatusCode: http.StatusBadRequest,
},
ErrNegativeExpires: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Expires must be non-negative",
HTTPStatusCode: http.StatusBadRequest,
},
ErrAuthHeaderEmpty: {
Code: "InvalidArgument",
Description: "Authorization header is invalid -- one and only one ' ' (space) required.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrMissingDateHeader: {
Code: "AccessDenied",
Description: "AWS authentication requires a valid Date or x-amz-date header",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidQuerySignatureAlgo: {
Code: "AuthorizationQueryParametersError",
Description: "X-Amz-Algorithm only supports \"AWS4-HMAC-SHA256\".",
HTTPStatusCode: http.StatusBadRequest,
},
ErrExpiredPresignRequest: {
Code: "AccessDenied",
Description: "Request has expired",
HTTPStatusCode: http.StatusBadRequest,
},
// FIXME: Actual XML error response also contains the header that is missing from the list of signed header parameters.
ErrUnsignedHeaders: {
Code: "AccessDenied",
Description: "There were headers present in the request which were not signed",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidQueryParams: {
Code: "AuthorizationQueryParametersError",
Description: "Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrBucketAlreadyOwnedByYou: {
Code: "BucketAlreadyOwnedByYou",
Description: "Your previous request to create the named bucket succeeded and you already own it.",
HTTPStatusCode: http.StatusConflict,
},
/// Bucket notification related errors.
ErrEventNotification: {
Code: "InvalidArgument",
Description: "A specified event is not supported for notifications.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrARNNotification: {
Code: "InvalidArgument",
Description: "A specified destination ARN does not exist or is not well-formed. Verify the destination ARN.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrRegionNotification: {
Code: "InvalidArgument",
Description: "A specified destination is in a different region than the bucket. You must use a destination that resides in the same region as the bucket.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrOverlappingFilterNotification: {
Code: "InvalidArgument",
Description: "An object key name filtering rule defined with overlapping prefixes, overlapping suffixes, or overlapping combinations of prefixes and suffixes for the same event types.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrFilterNameInvalid: {
Code: "InvalidArgument",
Description: "filter rule name must be either prefix or suffix",
HTTPStatusCode: http.StatusBadRequest,
},
ErrFilterNamePrefix: {
Code: "InvalidArgument",
Description: "Cannot specify more than one prefix rule in a filter.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrFilterNameSuffix: {
Code: "InvalidArgument",
Description: "Cannot specify more than one suffix rule in a filter.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrFilterPrefixValueInvalid: {
Code: "InvalidArgument",
Description: "prefix rule value cannot exceed 1024 characters",
HTTPStatusCode: http.StatusBadRequest,
},
/// S3 extensions.
ErrContentSHA256Mismatch: {
Code: "XAmzContentSHA256Mismatch",
Description: "The provided 'x-amz-content-sha256' header does not match what was computed.",
HTTPStatusCode: http.StatusBadRequest,
},
/// Minio extensions.
ErrStorageFull: {
Code: "XMinioStorageFull",
Description: "Storage backend has reached its minimum free disk threshold. Please delete few objects to proceed.",
HTTPStatusCode: http.StatusInternalServerError,
},
ErrObjectExistsAsDirectory: {
Code: "XMinioObjectExistsAsDirectory",
Description: "Object name already exists as a directory.",
HTTPStatusCode: http.StatusConflict,
},
ErrReadQuorum: {
Code: "XMinioReadQuorum",
Description: "Multiple disk failures, unable to reconstruct data.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrWriteQuorum: {
Code: "XMinioWriteQuorum",
Description: "Multiple disks failures, unable to write data.",
HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrPolicyNesting: {
Code: "XMinioPolicyNesting",
Description: "Policy nesting conflict has occurred.",
HTTPStatusCode: http.StatusConflict,
},
ErrInvalidObjectName: {
Code: "XMinioInvalidObjectName",
Description: "Object name contains unsupported characters. Unsupported characters are `^*|\\\"",
HTTPStatusCode: http.StatusBadRequest,
},
// Add your error structure here.
}
// toAPIErrorCode - Converts embedded errors. Convenience
// function written to handle all cases where we have known types of
// errors returned by underlying layers.
func toAPIErrorCode(err error) (apiErr APIErrorCode) {
if err == nil {
return ErrNone
}
// Verify if the underlying error is signature mismatch.
switch err {
case errSignatureMismatch:
apiErr = ErrSignatureDoesNotMatch
case errContentSHA256Mismatch:
apiErr = ErrContentSHA256Mismatch
}
if apiErr != ErrNone {
// If there was a match in the above switch case.
return apiErr
}
switch err.(type) {
case StorageFull:
apiErr = ErrStorageFull
case BadDigest:
apiErr = ErrBadDigest
case IncompleteBody:
apiErr = ErrIncompleteBody
case ObjectExistsAsDirectory:
apiErr = ErrObjectExistsAsDirectory
case BucketNameInvalid:
apiErr = ErrInvalidBucketName
case BucketNotFound:
apiErr = ErrNoSuchBucket
case BucketNotEmpty:
apiErr = ErrBucketNotEmpty
case BucketExists:
apiErr = ErrBucketAlreadyOwnedByYou
case ObjectNotFound:
apiErr = ErrNoSuchKey
case ObjectNameInvalid:
apiErr = ErrInvalidObjectName
case InvalidUploadID:
apiErr = ErrNoSuchUpload
case InvalidPart:
apiErr = ErrInvalidPart
case InsufficientWriteQuorum:
apiErr = ErrWriteQuorum
case InsufficientReadQuorum:
apiErr = ErrReadQuorum
case PartTooSmall:
apiErr = ErrEntityTooSmall
default:
apiErr = ErrInternalError
}
return apiErr
}
// getAPIError provides API Error for input API error code.
func getAPIError(code APIErrorCode) APIError {
return errorCodeResponse[code]
}
// getAPIErrorResponse takes a standard error and resource value and
// provides an encodable populated response value
func getAPIErrorResponse(err APIError, resource string) APIErrorResponse {
var data = APIErrorResponse{}
data.Code = err.Code
data.Message = err.Description
if resource != "" {
data.Resource = resource
}
// TODO implement this in future
data.RequestID = "3L137"
data.HostID = "3L137"
return data
}
func getErrMalformedCredentialDate(malformedDateStr string) {
}
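The mapping above is the whole translation layer: a typed error from the lower layers selects an `APIErrorCode` through the type switch in `toAPIErrorCode`, and that code indexes into `errorCodeResponse`. A condensed sketch of the same pattern, with a hypothetical stand-in error type and only two table entries:
```go
package main

import (
	"fmt"
	"net/http"
)

// Hypothetical stand-in for a typed storage-layer error.
type BucketNotFound struct{ Bucket string }

func (e BucketNotFound) Error() string { return "bucket not found: " + e.Bucket }

type apiError struct {
	Code           string
	Description    string
	HTTPStatusCode int
}

// Two-entry version of the errorCodeResponse table.
var errorTable = map[string]apiError{
	"NoSuchBucket":  {"NoSuchBucket", "The specified bucket does not exist", http.StatusNotFound},
	"InternalError": {"InternalError", "We encountered an internal error, please try again.", http.StatusInternalServerError},
}

// toCode mirrors the type switch in toAPIErrorCode.
func toCode(err error) string {
	switch err.(type) {
	case BucketNotFound:
		return "NoSuchBucket"
	default:
		return "InternalError"
	}
}

func main() {
	var err error = BucketNotFound{Bucket: "photos"}
	e := errorTable[toCode(err)]
	fmt.Println(e.HTTPStatusCode, e.Code, "-", e.Description)
}
```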

89
api-headers.go Normal file

@@ -0,0 +1,89 @@
/*
* Minio Cloud Storage, (C) 2015 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"bytes"
"crypto/rand"
"encoding/xml"
"net/http"
"runtime"
"strconv"
)
//// helpers
// Static alphaNumeric table used for generating unique request ids
var alphaNumericTable = []byte("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
// generateRequestID - Generate request id
func generateRequestID() []byte {
alpha := make([]byte, 16)
rand.Read(alpha)
for i := 0; i < 16; i++ {
alpha[i] = alphaNumericTable[alpha[i]%byte(len(alphaNumericTable))]
}
return alpha
}
// Write http common headers
func setCommonHeaders(w http.ResponseWriter) {
// Set unique request ID for each reply.
w.Header().Set("X-Amz-Request-Id", string(generateRequestID()))
w.Header().Set("Server", ("Minio/" + minioReleaseTag + " (" + runtime.GOOS + "; " + runtime.GOARCH + ")"))
w.Header().Set("Accept-Ranges", "bytes")
}
// Encodes the response into XML format.
func encodeResponse(response interface{}) []byte {
var bytesBuffer bytes.Buffer
bytesBuffer.WriteString(xml.Header)
e := xml.NewEncoder(&bytesBuffer)
e.Encode(response)
return bytesBuffer.Bytes()
}
// Write object header
func setObjectHeaders(w http.ResponseWriter, objInfo ObjectInfo, contentRange *httpRange) {
// set common headers
setCommonHeaders(w)
// Set content length.
w.Header().Set("Content-Length", strconv.FormatInt(objInfo.Size, 10))
// Set last modified time.
lastModified := objInfo.ModTime.UTC().Format(http.TimeFormat)
w.Header().Set("Last-Modified", lastModified)
// Set Etag if available.
if objInfo.MD5Sum != "" {
w.Header().Set("ETag", "\""+objInfo.MD5Sum+"\"")
}
// Set all other user defined metadata.
for k, v := range objInfo.UserDefined {
w.Header().Set(k, v)
}
// for providing ranged content
if contentRange != nil && contentRange.offsetBegin > -1 {
// Override content-length
w.Header().Set("Content-Length", strconv.FormatInt(contentRange.getLength(), 10))
w.Header().Set("Content-Range", contentRange.String())
w.WriteHeader(http.StatusPartialContent)
}
}
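A rough sketch of how `setCommonHeaders` and `encodeResponse` compose when writing a reply, using a throwaway response type and an `httptest` recorder instead of a live server; the request id is hard-coded here rather than generated:
```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Throwaway response type standing in for the real API responses.
type pingResponse struct {
	XMLName xml.Name `xml:"Ping"`
	Status  string
}

// encode mirrors encodeResponse above: XML header followed by the encoded body.
func encode(v interface{}) []byte {
	var buf bytes.Buffer
	buf.WriteString(xml.Header)
	xml.NewEncoder(&buf).Encode(v)
	return buf.Bytes()
}

func main() {
	w := httptest.NewRecorder()
	// Common headers, as setCommonHeaders would set them.
	w.Header().Set("X-Amz-Request-Id", "3L137")
	w.Header().Set("Accept-Ranges", "bytes")
	w.WriteHeader(http.StatusOK)
	w.Write(encode(pingResponse{Status: "ok"}))

	fmt.Println(w.Code)
	fmt.Println(w.Body.String())
}
```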

40
api-headers_test.go Normal file

@@ -0,0 +1,40 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"testing"
)
func TestGenerateRequestID(t *testing.T) {
// Ensure that it returns an alphanumeric result of length 16.
var id = generateRequestID()
if len(id) != 16 {
t.Fail()
}
var e rune
for _, char := range id {
e = rune(char)
// Ensure that it is alphanumeric, in this case, between 0-9 and A-Z.
if !(('0' <= e && e <= '9') || ('A' <= e && e <= 'Z')) {
t.Fail()
}
}
}

79
api-resources.go Normal file

@@ -0,0 +1,79 @@
/*
* Minio Cloud Storage, (C) 2015 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"net/url"
"strconv"
)
// Parse bucket url queries
func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string, maxkeys int, encodingType string) {
prefix = values.Get("prefix")
marker = values.Get("marker")
delimiter = values.Get("delimiter")
if values.Get("max-keys") != "" {
maxkeys, _ = strconv.Atoi(values.Get("max-keys"))
} else {
maxkeys = maxObjectList
}
encodingType = values.Get("encoding-type")
return
}
// Parse bucket url queries for ListObjects V2.
func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimiter string, maxkeys int, encodingType string) {
prefix = values.Get("prefix")
startAfter = values.Get("start-after")
delimiter = values.Get("delimiter")
if values.Get("max-keys") != "" {
maxkeys, _ = strconv.Atoi(values.Get("max-keys"))
} else {
maxkeys = maxObjectList
}
encodingType = values.Get("encoding-type")
token = values.Get("continuation-token")
return
}
// Parse bucket url queries for ?uploads
func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadIDMarker, delimiter string, maxUploads int, encodingType string) {
prefix = values.Get("prefix")
keyMarker = values.Get("key-marker")
uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter")
if values.Get("max-uploads") != "" {
maxUploads, _ = strconv.Atoi(values.Get("max-uploads"))
} else {
maxUploads = maxUploadsList
}
encodingType = values.Get("encoding-type")
return
}
// Parse object url queries
func getObjectResources(values url.Values) (uploadID string, partNumberMarker, maxParts int, encodingType string) {
uploadID = values.Get("uploadId")
partNumberMarker, _ = strconv.Atoi(values.Get("part-number-marker"))
if values.Get("max-parts") != "" {
maxParts, _ = strconv.Atoi(values.Get("max-parts"))
} else {
maxParts = maxPartsList
}
encodingType = values.Get("encoding-type")
return
}
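All of the parsers above follow the same shape: read a query parameter from `url.Values` and fall back to a list-size default when it is absent. A compact sketch of that pattern for the ListObjects V1 arguments, with 1000 standing in for `maxObjectList`:
```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

const defaultMaxKeys = 1000 // stand-in for maxObjectList above

func main() {
	// Query string as it might appear on a ListObjects V1 request.
	values, err := url.ParseQuery("prefix=photos/&marker=photos/2016.jpg&max-keys=250&delimiter=/")
	if err != nil {
		panic(err)
	}

	prefix := values.Get("prefix")
	marker := values.Get("marker")
	delimiter := values.Get("delimiter")
	maxKeys := defaultMaxKeys
	if v := values.Get("max-keys"); v != "" {
		maxKeys, _ = strconv.Atoi(v)
	}

	fmt.Println(prefix, marker, delimiter, maxKeys)
}
```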

54
api-response-multipart.go Normal file

@@ -0,0 +1,54 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// File carries any specific responses constructed/necessary in
// multipart operations.
package main
import "net/http"
// writeErrorResponsePartTooSmall - function is used specifically to
// construct a proper error response during CompleteMultipartUpload
// when one of the parts is < 5MB.
// The requirement comes due to the fact that generic ErrorResponse
// XML doesn't carry the additional fields required to send this
// error. So we construct a new type which lies well within the scope
// of this function.
func writePartSmallErrorResponse(w http.ResponseWriter, r *http.Request, err PartTooSmall) {
// Represents additional fields necessary for ErrPartTooSmall S3 error.
type completeMultipartAPIError struct {
// Proposed size represents uploaded size of the part.
ProposedSize int64
// Minimum size allowed represents the minimum size allowed per
// part. Defaults to 5MB.
MinSizeAllowed int64
// Part number of the part which is incorrect.
PartNumber int
// ETag of the part which is incorrect.
PartETag string
// Other default XML error responses.
APIErrorResponse
}
// Generate complete multipart error response.
errorResponse := getAPIErrorResponse(getAPIError(toAPIErrorCode(err)), r.URL.Path)
cmpErrResp := completeMultipartAPIError{err.PartSize, int64(5242880), err.PartNumber, err.PartETag, errorResponse}
encodedErrorResponse := encodeResponse(cmpErrResp)
// Write error body
w.Write(encodedErrorResponse)
w.(http.Flusher).Flush()
}
// Add any other multipart specific responses here.
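The trick in `writePartSmallErrorResponse` is to embed the generic `APIErrorResponse` inside a wider struct so the part-specific fields ride along in the same `<Error>` element. A stripped-down sketch of that embedding with made-up field values:
```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Reduced copy of APIErrorResponse with only the fields needed here.
type APIErrorResponse struct {
	XMLName xml.Name `xml:"Error" json:"-"`
	Code    string
	Message string
}

// completeMultipartAPIError adds the part-specific fields by embedding
// the generic response, as writePartSmallErrorResponse does above.
type completeMultipartAPIError struct {
	ProposedSize   int64
	MinSizeAllowed int64
	PartNumber     int
	PartETag       string
	APIErrorResponse
}

func main() {
	resp := completeMultipartAPIError{
		ProposedSize:   1048576,
		MinSizeAllowed: 5242880,
		PartNumber:     2,
		PartETag:       "d41d8cd98f00b204e9800998ecf8427e",
		APIErrorResponse: APIErrorResponse{
			Code:    "EntityTooSmall",
			Message: "Your proposed upload is smaller than the minimum allowed object size.",
		},
	}
	out, err := xml.MarshalIndent(resp, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(xml.Header + string(out))
}
```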

519
api-response.go Normal file

@@ -0,0 +1,519 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"encoding/xml"
"net/http"
"path"
"time"
)
const (
timeFormatAMZ = "2006-01-02T15:04:05.000Z" // Reply date format
maxObjectList = 1000 // Limit number of objects in a listObjectsResponse.
maxUploadsList = 1000 // Limit number of uploads in a listUploadsResponse.
maxPartsList = 1000 // Limit number of parts in a listPartsResponse.
)
// LocationResponse - format for location response.
type LocationResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ LocationConstraint" json:"-"`
Location string `xml:",chardata"`
}
// ListObjectsResponse - format for list objects response.
type ListObjectsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
CommonPrefixes []CommonPrefix
Contents []Object
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
Marker string
MaxKeys int
Name string
// When response is truncated (the IsTruncated element value in the response
// is true), you can use the key name in this field as marker in the subsequent
// request to get next set of objects. Server lists objects in alphabetical
// order. Note: This element is returned only if you have the delimiter request parameter
// specified. If the response does not include the NextMarker and it is truncated,
// you can use the value of the last Key in the response as the marker in the
// subsequent request to get the next set of object keys.
NextMarker string
Prefix string
}
// ListObjectsV2Response - format for list objects response.
type ListObjectsV2Response struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult" json:"-"`
CommonPrefixes []CommonPrefix
Contents []Object
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
StartAfter string
MaxKeys int
Name string
// When response is truncated (the IsTruncated element value in the response
// is true), you can use the key name in this field as marker in the subsequent
// request to get next set of objects. Server lists objects in alphabetical
// order. Note: This element is returned only if you have the delimiter request parameter
// specified. If the response does not include the NextMarker and it is truncated,
// you can use the value of the last Key in the response as the marker in the
// subsequent request to get the next set of object keys.
ContinuationToken string
NextContinuationToken string
Prefix string
}
// Part container for part metadata.
type Part struct {
PartNumber int
ETag string
LastModified string
Size int64
}
// ListPartsResponse - format for list parts response.
type ListPartsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListPartsResult" json:"-"`
Bucket string
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
// The class of storage used to store the object.
StorageClass string
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
IsTruncated bool
// List of parts.
Parts []Part `xml:"Part"`
}
// ListMultipartUploadsResponse - format for list multipart uploads response.
type ListMultipartUploadsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListMultipartUploadsResult" json:"-"`
Bucket string
KeyMarker string
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
EncodingType string
MaxUploads int
IsTruncated bool
Uploads []Upload `xml:"Upload"`
Prefix string
Delimiter string
CommonPrefixes []CommonPrefix
}
// ListBucketsResponse - format for list buckets response
type ListBucketsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListAllMyBucketsResult" json:"-"`
// Container for one or more buckets.
Buckets struct {
Buckets []Bucket `xml:"Bucket"`
} // Buckets are nested
Owner Owner
}
// Upload container for in progress multipart upload
type Upload struct {
Key string
UploadID string `xml:"UploadId"`
Initiator Initiator
Owner Owner
StorageClass string
Initiated string
}
// CommonPrefix container for prefix response in ListObjectsResponse
type CommonPrefix struct {
Prefix string
}
// Bucket container for bucket metadata
type Bucket struct {
Name string
CreationDate string // time string of format "2006-01-02T15:04:05.000Z"
}
// Object container for object metadata
type Object struct {
ETag string
Key string
LastModified string // time string of format "2006-01-02T15:04:05.000Z"
Size int64
Owner Owner
// The class of storage used to store the object.
StorageClass string
}
// CopyObjectResponse container returns ETag and LastModified of the
// successfully copied object
type CopyObjectResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CopyObjectResult" json:"-"`
ETag string
LastModified string // time string of format "2006-01-02T15:04:05.000Z"
}
// Initiator inherit from Owner struct, fields are same
type Initiator Owner
// Owner - bucket owner/principal
type Owner struct {
ID string
DisplayName string
}
// InitiateMultipartUploadResponse container for InitiateMultiPartUpload response, provides uploadID to start MultiPart upload
type InitiateMultipartUploadResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ InitiateMultipartUploadResult" json:"-"`
Bucket string
Key string
UploadID string `xml:"UploadId"`
}
// CompleteMultipartUploadResponse container for completed multipart upload response
type CompleteMultipartUploadResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUploadResult" json:"-"`
Location string
Bucket string
Key string
ETag string
}
// PostResponse container for completed post upload response
type PostResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ PostResponse" json:"-"`
Location string
Bucket string
Key string
ETag string
}
// DeleteError structure.
type DeleteError struct {
Code string
Message string
Key string
}
// DeleteObjectsResponse container for multiple object deletes.
type DeleteObjectsResponse struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ DeleteResult" json:"-"`
// Collection of all deleted objects
DeletedObjects []ObjectIdentifier `xml:"Deleted,omitempty"`
// Collection of errors deleting certain objects.
Errors []DeleteError `xml:"Error,omitempty"`
}
// getLocation gets URL location.
func getLocation(r *http.Request) string {
return path.Clean(r.URL.Path) // Clean any trailing slashes.
}
// getObjectLocation gets the relative URL for an object
func getObjectLocation(bucketName string, key string) string {
return "/" + bucketName + "/" + key
}
// takes an array of bucket metadata (BucketInfo) for serialization
// input:
// array of bucket metadata
//
// output:
// populated struct that can be serialized to match xml and json api spec output
func generateListBucketsResponse(buckets []BucketInfo) ListBucketsResponse {
var listbuckets []Bucket
var data = ListBucketsResponse{}
var owner = Owner{}
owner.ID = "minio"
owner.DisplayName = "minio"
for _, bucket := range buckets {
var listbucket = Bucket{}
listbucket.Name = bucket.Name
listbucket.CreationDate = bucket.Created.Format(timeFormatAMZ)
listbuckets = append(listbuckets, listbucket)
}
data.Owner = owner
data.Buckets.Buckets = listbuckets
return data
}
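For context, a minimal sketch (not part of this changeset) of how the response generated above marshals into S3-style XML via encoding/xml; it assumes it is compiled alongside the types in this diff, and BucketInfo{Name, Created} is taken from the object layer code elsewhere in the repository.
package main

import (
	"encoding/xml"
	"fmt"
	"time"
)

// exampleListBucketsXML is a sketch only: it relies on the
// ListBucketsResponse/Bucket/Owner types and generateListBucketsResponse
// defined above, plus a BucketInfo{Name, Created} type from elsewhere in
// this changeset.
func exampleListBucketsXML() {
	resp := generateListBucketsResponse([]BucketInfo{
		{Name: "testbucket", Created: time.Now().UTC()},
	})
	// encoding/xml honors the XMLName struct tag, so this produces a
	// <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
	// document of the shape S3 clients expect.
	out, err := xml.MarshalIndent(resp, "", "  ")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(out))
}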
// generates a ListObjectsV1 response for the given bucket with other enumerated options.
func generateListObjectsV1Response(bucket, prefix, marker, delimiter string, maxKeys int, resp ListObjectsInfo) ListObjectsResponse {
var contents []Object
var prefixes []CommonPrefix
var owner = Owner{}
var data = ListObjectsResponse{}
owner.ID = "minio"
owner.DisplayName = "minio"
for _, object := range resp.Objects {
var content = Object{}
if object.Name == "" {
continue
}
content.Key = object.Name
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZ)
if object.MD5Sum != "" {
content.ETag = "\"" + object.MD5Sum + "\""
}
content.Size = object.Size
content.StorageClass = "STANDARD"
content.Owner = owner
contents = append(contents, content)
}
// TODO - support EncodingType in xml decoding
data.Name = bucket
data.Contents = contents
data.Prefix = prefix
data.Marker = marker
data.Delimiter = delimiter
data.MaxKeys = maxKeys
data.NextMarker = resp.NextMarker
data.IsTruncated = resp.IsTruncated
for _, prefix := range resp.Prefixes {
var prefixItem = CommonPrefix{}
prefixItem.Prefix = prefix
prefixes = append(prefixes, prefixItem)
}
data.CommonPrefixes = prefixes
return data
}
// generates a ListObjectsV2 response for the given bucket with other enumerated options.
func generateListObjectsV2Response(bucket, prefix, token, startAfter, delimiter string, maxKeys int, resp ListObjectsInfo) ListObjectsV2Response {
var contents []Object
var prefixes []CommonPrefix
var owner = Owner{}
var data = ListObjectsV2Response{}
owner.ID = "minio"
owner.DisplayName = "minio"
for _, object := range resp.Objects {
var content = Object{}
if object.Name == "" {
continue
}
content.Key = object.Name
content.LastModified = object.ModTime.UTC().Format(timeFormatAMZ)
if object.MD5Sum != "" {
content.ETag = "\"" + object.MD5Sum + "\""
}
content.Size = object.Size
content.StorageClass = "STANDARD"
content.Owner = owner
contents = append(contents, content)
}
// TODO - support EncodingType in xml decoding
data.Name = bucket
data.Contents = contents
data.StartAfter = startAfter
data.Delimiter = delimiter
data.Prefix = prefix
data.MaxKeys = maxKeys
data.ContinuationToken = token
data.NextContinuationToken = resp.NextMarker
data.IsTruncated = resp.IsTruncated
for _, prefix := range resp.Prefixes {
var prefixItem = CommonPrefix{}
prefixItem.Prefix = prefix
prefixes = append(prefixes, prefixItem)
}
data.CommonPrefixes = prefixes
return data
}
// generateCopyObjectResponse
func generateCopyObjectResponse(etag string, lastModified time.Time) CopyObjectResponse {
return CopyObjectResponse{
ETag: "\"" + etag + "\"",
LastModified: lastModified.UTC().Format(timeFormatAMZ),
}
}
// generateInitiateMultipartUploadResponse
func generateInitiateMultipartUploadResponse(bucket, key, uploadID string) InitiateMultipartUploadResponse {
return InitiateMultipartUploadResponse{
Bucket: bucket,
Key: key,
UploadID: uploadID,
}
}
// generateCompleteMultipartUploadResponse
func generateCompleteMultpartUploadResponse(bucket, key, location, etag string) CompleteMultipartUploadResponse {
return CompleteMultipartUploadResponse{
Location: location,
Bucket: bucket,
Key: key,
ETag: etag,
}
}
// generateListPartsResponse
func generateListPartsResponse(partsInfo ListPartsInfo) ListPartsResponse {
// TODO - support EncodingType in xml decoding
listPartsResponse := ListPartsResponse{}
listPartsResponse.Bucket = partsInfo.Bucket
listPartsResponse.Key = partsInfo.Object
listPartsResponse.UploadID = partsInfo.UploadID
listPartsResponse.StorageClass = "STANDARD"
listPartsResponse.Initiator.ID = "minio"
listPartsResponse.Initiator.DisplayName = "minio"
listPartsResponse.Owner.ID = "minio"
listPartsResponse.Owner.DisplayName = "minio"
listPartsResponse.MaxParts = partsInfo.MaxParts
listPartsResponse.PartNumberMarker = partsInfo.PartNumberMarker
listPartsResponse.IsTruncated = partsInfo.IsTruncated
listPartsResponse.NextPartNumberMarker = partsInfo.NextPartNumberMarker
listPartsResponse.Parts = make([]Part, len(partsInfo.Parts))
for index, part := range partsInfo.Parts {
newPart := Part{}
newPart.PartNumber = part.PartNumber
newPart.ETag = "\"" + part.ETag + "\""
newPart.Size = part.Size
newPart.LastModified = part.LastModified.UTC().Format(timeFormatAMZ)
listPartsResponse.Parts[index] = newPart
}
return listPartsResponse
}
// generateListMultipartUploadsResponse
func generateListMultipartUploadsResponse(bucket string, multipartsInfo ListMultipartsInfo) ListMultipartUploadsResponse {
listMultipartUploadsResponse := ListMultipartUploadsResponse{}
listMultipartUploadsResponse.Bucket = bucket
listMultipartUploadsResponse.Delimiter = multipartsInfo.Delimiter
listMultipartUploadsResponse.IsTruncated = multipartsInfo.IsTruncated
listMultipartUploadsResponse.EncodingType = multipartsInfo.EncodingType
listMultipartUploadsResponse.Prefix = multipartsInfo.Prefix
listMultipartUploadsResponse.KeyMarker = multipartsInfo.KeyMarker
listMultipartUploadsResponse.NextKeyMarker = multipartsInfo.NextKeyMarker
listMultipartUploadsResponse.MaxUploads = multipartsInfo.MaxUploads
listMultipartUploadsResponse.NextUploadIDMarker = multipartsInfo.NextUploadIDMarker
listMultipartUploadsResponse.UploadIDMarker = multipartsInfo.UploadIDMarker
listMultipartUploadsResponse.CommonPrefixes = make([]CommonPrefix, len(multipartsInfo.CommonPrefixes))
for index, commonPrefix := range multipartsInfo.CommonPrefixes {
listMultipartUploadsResponse.CommonPrefixes[index] = CommonPrefix{
Prefix: commonPrefix,
}
}
listMultipartUploadsResponse.Uploads = make([]Upload, len(multipartsInfo.Uploads))
for index, upload := range multipartsInfo.Uploads {
newUpload := Upload{}
newUpload.UploadID = upload.UploadID
newUpload.Key = upload.Object
newUpload.Initiated = upload.Initiated.UTC().Format(timeFormatAMZ)
listMultipartUploadsResponse.Uploads[index] = newUpload
}
return listMultipartUploadsResponse
}
// generate multi objects delete response.
func generateMultiDeleteResponse(quiet bool, deletedObjects []ObjectIdentifier, errs []DeleteError) DeleteObjectsResponse {
deleteResp := DeleteObjectsResponse{}
if !quiet {
deleteResp.DeletedObjects = deletedObjects
}
deleteResp.Errors = errs
return deleteResp
}
// writeSuccessResponse writes success headers and response, if any.
func writeSuccessResponse(w http.ResponseWriter, response []byte) {
setCommonHeaders(w)
if response == nil {
w.WriteHeader(http.StatusOK)
return
}
w.Write(response)
w.(http.Flusher).Flush()
}
// writeSuccessNoContent writes success headers with HTTP status 204.
func writeSuccessNoContent(w http.ResponseWriter) {
setCommonHeaders(w)
w.WriteHeader(http.StatusNoContent)
}
// writeErrorResponse writes error headers and the error response body.
func writeErrorResponse(w http.ResponseWriter, req *http.Request, errorCode APIErrorCode, resource string) {
error := getAPIError(errorCode)
// set common headers
setCommonHeaders(w)
// write Header
w.WriteHeader(error.HTTPStatusCode)
writeErrorResponseNoHeader(w, req, errorCode, resource)
}
func writeErrorResponseNoHeader(w http.ResponseWriter, req *http.Request, errorCode APIErrorCode, resource string) {
error := getAPIError(errorCode)
// Generate error response.
errorResponse := getAPIErrorResponse(error, resource)
encodedErrorResponse := encodeResponse(errorResponse)
// HEAD should have no body, do not attempt to write to it
if req.Method != "HEAD" {
// write error body
w.Write(encodedErrorResponse)
w.(http.Flusher).Flush()
}
}

api-router.go

@@ -0,0 +1,94 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import router "github.com/gorilla/mux"
// objectAPIHandlers implements and provides http handlers for S3 API.
type objectAPIHandlers struct {
ObjectAPI ObjectLayer
}
// registerAPIRouter - registers S3 compatible APIs.
func registerAPIRouter(mux *router.Router, api objectAPIHandlers) {
// API Router
apiRouter := mux.NewRoute().PathPrefix("/").Subrouter()
// Bucket router
bucket := apiRouter.PathPrefix("/{bucket}").Subrouter()
/// Object operations
// HeadObject
bucket.Methods("HEAD").Path("/{object:.+}").HandlerFunc(api.HeadObjectHandler)
// PutObjectPart
bucket.Methods("PUT").Path("/{object:.+}").HandlerFunc(api.PutObjectPartHandler).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
// ListObjectParts
bucket.Methods("GET").Path("/{object:.+}").HandlerFunc(api.ListObjectPartsHandler).Queries("uploadId", "{uploadId:.*}")
// CompleteMultipartUpload
bucket.Methods("POST").Path("/{object:.+}").HandlerFunc(api.CompleteMultipartUploadHandler).Queries("uploadId", "{uploadId:.*}")
// NewMultipartUpload
bucket.Methods("POST").Path("/{object:.+}").HandlerFunc(api.NewMultipartUploadHandler).Queries("uploads", "")
// AbortMultipartUpload
bucket.Methods("DELETE").Path("/{object:.+}").HandlerFunc(api.AbortMultipartUploadHandler).Queries("uploadId", "{uploadId:.*}")
// GetObject
bucket.Methods("GET").Path("/{object:.+}").HandlerFunc(api.GetObjectHandler)
// CopyObject
bucket.Methods("PUT").Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", ".*?(\\/|%2F).*?").HandlerFunc(api.CopyObjectHandler)
// PutObject
bucket.Methods("PUT").Path("/{object:.+}").HandlerFunc(api.PutObjectHandler)
// DeleteObject
bucket.Methods("DELETE").Path("/{object:.+}").HandlerFunc(api.DeleteObjectHandler)
/// Bucket operations
// GetBucketLocation
bucket.Methods("GET").HandlerFunc(api.GetBucketLocationHandler).Queries("location", "")
// GetBucketPolicy
bucket.Methods("GET").HandlerFunc(api.GetBucketPolicyHandler).Queries("policy", "")
// GetBucketNotification
bucket.Methods("GET").HandlerFunc(api.GetBucketNotificationHandler).Queries("notification", "")
// ListenBucketNotification
bucket.Methods("GET").HandlerFunc(api.ListenBucketNotificationHandler).Queries("notificationARN", "{notificationARN:.*}")
// ListMultipartUploads
bucket.Methods("GET").HandlerFunc(api.ListMultipartUploadsHandler).Queries("uploads", "")
// ListObjectsV2
bucket.Methods("GET").HandlerFunc(api.ListObjectsV2Handler).Queries("list-type", "2")
// ListObjectsV1 (Legacy)
bucket.Methods("GET").HandlerFunc(api.ListObjectsV1Handler)
// PutBucketPolicy
bucket.Methods("PUT").HandlerFunc(api.PutBucketPolicyHandler).Queries("policy", "")
// PutBucketNotification
bucket.Methods("PUT").HandlerFunc(api.PutBucketNotificationHandler).Queries("notification", "")
// PutBucket
bucket.Methods("PUT").HandlerFunc(api.PutBucketHandler)
// HeadBucket
bucket.Methods("HEAD").HandlerFunc(api.HeadBucketHandler)
// PostPolicy
bucket.Methods("POST").HeadersRegexp("Content-Type", "multipart/form-data*").HandlerFunc(api.PostPolicyBucketHandler)
// DeleteMultipleObjects
bucket.Methods("POST").HandlerFunc(api.DeleteMultipleObjectsHandler)
// DeleteBucketPolicy
bucket.Methods("DELETE").HandlerFunc(api.DeleteBucketPolicyHandler).Queries("policy", "")
// DeleteBucket
bucket.Methods("DELETE").HandlerFunc(api.DeleteBucketHandler)
/// Root operation
// ListBuckets
apiRouter.Methods("GET").HandlerFunc(api.ListBucketsHandler)
}
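A minimal usage sketch (not part of this changeset) of how registerAPIRouter could be wired into an HTTP server; the actual server startup, TLS handling and listener address live elsewhere in the repository, so the address below is illustrative only.
package main

import (
	"log"
	"net/http"

	router "github.com/gorilla/mux"
)

// serveAPISketch is a sketch only: it takes an already-constructed
// ObjectLayer and exposes the S3-compatible routes registered above.
func serveAPISketch(objAPI ObjectLayer) {
	mux := router.NewRouter()
	// Register all bucket and object operations on the router.
	registerAPIRouter(mux, objectAPIHandlers{ObjectAPI: objAPI})
	// Serve the API; ":9000" stands in for whatever address the server
	// is actually configured with.
	log.Fatal(http.ListenAndServe(":9000", mux))
}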

appveyor.yml

@@ -0,0 +1,39 @@
# version format
version: "{build}"
# Operating system (build VM template)
os: Windows Server 2012 R2
# Platform.
platform: x64
clone_folder: c:\gopath\src\github.com\minio\minio
# environment variables
environment:
GOPATH: c:\gopath
GO15VENDOREXPERIMENT: 1
# scripts that run after cloning repository
install:
- set PATH=%GOPATH%\bin;c:\go\bin;%PATH%
- go version
- go env
# to run your custom scripts instead of automatic MSBuild
build_script:
- go test -race .
- go test -race github.com/minio/minio/pkg...
- go test -coverprofile=coverage.txt -covermode=atomic
- go run buildscripts/gen-ldflags.go > temp.txt
- set /p BUILD_LDFLAGS=<temp.txt
- go build -ldflags="%BUILD_LDFLAGS%" -o %GOPATH%\bin\minio.exe
after_test:
- bash <(curl -s https://codecov.io/bash)
# to disable automatic tests
test: off
# to disable deployment
deploy: off

auth-handler.go

@@ -0,0 +1,203 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"bytes"
"crypto/md5"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"io/ioutil"
"net/http"
"strings"
)
// Verify if the request http Header "x-amz-content-sha256" == "UNSIGNED-PAYLOAD"
func isRequestUnsignedPayload(r *http.Request) bool {
return r.Header.Get("x-amz-content-sha256") == unsignedPayload
}
// Verify if request has JWT.
func isRequestJWT(r *http.Request) bool {
return strings.HasPrefix(r.Header.Get("Authorization"), jwtAlgorithm)
}
// Verify if request has AWS Signature Version '4'.
func isRequestSignatureV4(r *http.Request) bool {
return strings.HasPrefix(r.Header.Get("Authorization"), signV4Algorithm)
}
// Verify if request has AWS PreSign Version '4'.
func isRequestPresignedSignatureV4(r *http.Request) bool {
_, ok := r.URL.Query()["X-Amz-Credential"]
return ok
}
// Verify if request has AWS Post policy Signature Version '4'.
func isRequestPostPolicySignatureV4(r *http.Request) bool {
return strings.Contains(r.Header.Get("Content-Type"), "multipart/form-data") && r.Method == "POST"
}
// Verify if the request has AWS Streaming Signature Version '4'. This is only valid for 'PUT' operation.
func isRequestSignStreamingV4(r *http.Request) bool {
return r.Header.Get("x-amz-content-sha256") == streamingContentSHA256 && r.Method == "PUT"
}
// Authorization type.
type authType int
// List of all supported auth types.
const (
authTypeUnknown authType = iota
authTypeAnonymous
authTypePresigned
authTypePostPolicy
authTypeStreamingSigned
authTypeSigned
authTypeJWT
)
// Get request authentication type.
func getRequestAuthType(r *http.Request) authType {
if isRequestSignStreamingV4(r) {
return authTypeStreamingSigned
} else if isRequestSignatureV4(r) {
return authTypeSigned
} else if isRequestPresignedSignatureV4(r) {
return authTypePresigned
} else if isRequestJWT(r) {
return authTypeJWT
} else if isRequestPostPolicySignatureV4(r) {
return authTypePostPolicy
} else if _, ok := r.Header["Authorization"]; !ok {
return authTypeAnonymous
}
return authTypeUnknown
}
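To illustrate the classification above, a small sketch (not part of this changeset); it assumes signV4Algorithm is the standard "AWS4-HMAC-SHA256" prefix used by AWS Signature V4.
package main

import (
	"fmt"
	"net/http"
)

// exampleAuthTypes is a sketch only, showing how getRequestAuthType
// classifies a few representative requests.
func exampleAuthTypes() {
	// Signed request: Authorization header carries a Signature V4 value.
	signed, _ := http.NewRequest("GET", "http://localhost:9000/bucket", nil)
	signed.Header.Set("Authorization", "AWS4-HMAC-SHA256 Credential=...")

	// Presigned request: credentials travel in the query string instead.
	presigned, _ := http.NewRequest("GET", "http://localhost:9000/bucket?X-Amz-Credential=minio", nil)

	// Anonymous request: no Authorization header at all.
	anon, _ := http.NewRequest("GET", "http://localhost:9000/bucket", nil)

	fmt.Println(getRequestAuthType(signed) == authTypeSigned)       // true
	fmt.Println(getRequestAuthType(presigned) == authTypePresigned) // true
	fmt.Println(getRequestAuthType(anon) == authTypeAnonymous)      // true
}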
// sum256 calculates sha256 sum for an input byte array
func sum256(data []byte) []byte {
hash := sha256.New()
hash.Write(data)
return hash.Sum(nil)
}
// sumMD5 calculates md5 sum for an input byte array
func sumMD5(data []byte) []byte {
hash := md5.New()
hash.Write(data)
return hash.Sum(nil)
}
// Verify if request has valid AWS Signature Version '4'.
func isReqAuthenticated(r *http.Request) (s3Error APIErrorCode) {
if r == nil {
return ErrInternalError
}
payload, err := ioutil.ReadAll(r.Body)
if err != nil {
return ErrInternalError
}
// Verify Content-Md5, if payload is set.
if r.Header.Get("Content-Md5") != "" {
if r.Header.Get("Content-Md5") != base64.StdEncoding.EncodeToString(sumMD5(payload)) {
return ErrBadDigest
}
}
// Populate back the payload.
r.Body = ioutil.NopCloser(bytes.NewReader(payload))
validateRegion := true // Validate region.
var sha256sum string
// Skips calculating sha256 on the payload on server,
// if client requested for it.
if skipContentSha256Cksum(r) {
sha256sum = unsignedPayload
} else {
sha256sum = hex.EncodeToString(sum256(payload))
}
if isRequestSignatureV4(r) {
return doesSignatureMatch(sha256sum, r, validateRegion)
} else if isRequestPresignedSignatureV4(r) {
return doesPresignedSignatureMatch(sha256sum, r, validateRegion)
}
return ErrAccessDenied
}
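For context, a sketch (not part of this changeset) of how a client would compute the Content-Md5 value that isReqAuthenticated compares against: the base64-encoded MD5 digest of the request payload.
package main

import (
	"crypto/md5"
	"encoding/base64"
)

// exampleContentMD5 is a sketch only: it produces the header value that the
// Content-Md5 check in isReqAuthenticated expects for a given payload.
func exampleContentMD5(payload []byte) string {
	sum := md5.Sum(payload)
	// base64.StdEncoding matches the comparison done above, e.g.
	// exampleContentMD5([]byte("hello")) == "XUFAKrxLKna5cZ2REBfFkg==".
	return base64.StdEncoding.EncodeToString(sum[:])
}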
// checkAuth - checks for conditions satisfying the authorization of
// the incoming request. Request should be either Presigned or Signed
// in accordance with AWS S3 Signature V4 requirements. ErrAccessDenied
// is returned for unhandled auth type. Once the auth type is identified
// request headers and body are used to calculate the signature validating
// the client signature present in request.
func checkAuth(r *http.Request) APIErrorCode {
aType := getRequestAuthType(r)
if aType != authTypePresigned && aType != authTypeSigned {
// For all unhandled auth types return error AccessDenied.
return ErrAccessDenied
}
// Validates the request for both Presigned and Signed.
return isReqAuthenticated(r)
}
// authHandler - handles all the incoming authorization headers and validates them if possible.
type authHandler struct {
handler http.Handler
}
// setAuthHandler to validate authorization header for the incoming request.
func setAuthHandler(h http.Handler) http.Handler {
return authHandler{h}
}
// List of all supported S3 auth types.
var supportedS3AuthTypes = []authType{
authTypeAnonymous,
authTypePresigned,
authTypeSigned,
authTypePostPolicy,
authTypeStreamingSigned,
}
// Validate if the authType is valid and supported.
func isSupportedS3AuthType(aType authType) bool {
for _, a := range supportedS3AuthTypes {
if a == aType {
return true
}
}
return false
}
// handler for validating incoming authorization headers.
func (a authHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
aType := getRequestAuthType(r)
if isSupportedS3AuthType(aType) {
// Let top level caller validate for anonymous and known signed requests.
a.handler.ServeHTTP(w, r)
return
} else if aType == authTypeJWT {
// Validate Authorization header if it is valid for a JWT request.
if !isJWTReqAuthenticated(r) {
w.WriteHeader(http.StatusUnauthorized)
return
}
a.handler.ServeHTTP(w, r)
return
}
writeErrorResponse(w, r, ErrSignatureVersionNotSupported, r.URL.Path)
}

auth-handler_test.go

@@ -0,0 +1,206 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"bytes"
"io"
"net/http"
"testing"
)
// Test all s3 supported auth types.
func TestS3SupportedAuthType(t *testing.T) {
type testCase struct {
authT authType
pass bool
}
// List of all valid and invalid test cases.
testCases := []testCase{
// Test 1 - supported s3 type anonymous.
{
authT: authTypeAnonymous,
pass: true,
},
// Test 2 - supported s3 type presigned.
{
authT: authTypePresigned,
pass: true,
},
// Test 3 - supported s3 type signed.
{
authT: authTypeSigned,
pass: true,
},
// Test 4 - supported s3 type with post policy.
{
authT: authTypePostPolicy,
pass: true,
},
// Test 5 - supported s3 type with streaming signed.
{
authT: authTypeStreamingSigned,
pass: true,
},
// Test 6 - JWT is not supported s3 type.
{
authT: authTypeJWT,
pass: false,
},
// Test 7 - unknown auth header is not supported s3 type.
{
authT: authTypeUnknown,
pass: false,
},
// Test 8 - some new auth type is not supported s3 type.
{
authT: authType(7),
pass: false,
},
}
// Validate all the test cases.
for i, tt := range testCases {
ok := isSupportedS3AuthType(tt.authT)
if ok != tt.pass {
t.Errorf("Test %d:, Expected %t, got %t", i+1, tt.pass, ok)
}
}
}
// TestIsRequestUnsignedPayload - Test validates the Unsigned payload detection logic.
func TestIsRequestUnsignedPayload(t *testing.T) {
testCases := []struct {
inputAmzContentHeader string
expectedResult bool
}{
// Test case - 1.
// Test case with "X-Amz-Content-Sha256" header set to empty value.
{"", false},
// Test case - 2.
// Test case with "X-Amz-Content-Sha256" header set to "UNSIGNED-PAYLOAD"
// The payload is flagged as unsigned when the "X-Amz-Content-Sha256" header is set to "UNSIGNED-PAYLOAD".
{"UNSIGNED-PAYLOAD", true},
// Test case - 3.
// set to a random value.
{"abcd", false},
}
// creating an input HTTP request.
// Only the headers are relevant for this particular test.
inputReq, err := http.NewRequest("GET", "http://example.com", nil)
if err != nil {
t.Fatalf("Error initializing input HTTP request: %v", err)
}
for i, testCase := range testCases {
inputReq.Header.Set("X-Amz-Content-Sha256", testCase.inputAmzContentHeader)
actualResult := isRequestUnsignedPayload(inputReq)
if testCase.expectedResult != actualResult {
t.Errorf("Test %d: Expected the result to `%v`, but instead got `%v`", i+1, testCase.expectedResult, actualResult)
}
}
}
// TestIsRequestPresignedSignatureV4 - Test validates the logic for presigned signature version v4 detection.
func TestIsRequestPresignedSignatureV4(t *testing.T) {
testCases := []struct {
inputQueryKey string
inputQueryValue string
expectedResult bool
}{
// Test case - 1.
// Test case with no query key set.
{"", "", false},
// Test case - 2.
// Test case with query key "X-Amz-Credential" set.
{"X-Amz-Credential", "", true},
// Test case - 3.
{"X-Amz-Content-Sha256", "", false},
}
for i, testCase := range testCases {
// creating an input HTTP request.
// Only the query parameters are relevant for this particular test.
inputReq, err := http.NewRequest("GET", "http://example.com", nil)
if err != nil {
t.Fatalf("Error initializing input HTTP request: %v", err)
}
q := inputReq.URL.Query()
q.Add(testCase.inputQueryKey, testCase.inputQueryValue)
inputReq.URL.RawQuery = q.Encode()
actualResult := isRequestPresignedSignatureV4(inputReq)
if testCase.expectedResult != actualResult {
t.Errorf("Test %d: Expected the result to `%v`, but instead got `%v`", i+1, testCase.expectedResult, actualResult)
}
}
}
// Provides a fully populated http request instance, fails otherwise.
func mustNewRequest(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req, err := newTestRequest(method, urlStr, contentLength, body)
if err != nil {
t.Fatalf("Unable to initialize new http request %s", err)
}
return req
}
// This is similar to mustNewRequest but additionally the request
// is signed with AWS Signature V4, fails if not able to do so.
func mustNewSignedRequest(method string, urlStr string, contentLength int64, body io.ReadSeeker, t *testing.T) *http.Request {
req := mustNewRequest(method, urlStr, contentLength, body, t)
cred := serverConfig.GetCredential()
if err := signRequest(req, cred.AccessKeyID, cred.SecretAccessKey); err != nil {
t.Fatalf("Unable to inititalized new signed http request %s", err)
}
return req
}
// Tests the isReqAuthenticated function, validates the s3 error codes it returns.
func TestIsReqAuthenticated(t *testing.T) {
path, err := newTestConfig("us-east-1")
if err != nil {
t.Fatalf("unable initialize config file, %s", err)
}
defer removeAll(path)
serverConfig.SetCredential(credential{"myuser", "mypassword"})
// List of test cases for validating http request authentication.
testCases := []struct {
req *http.Request
s3Error APIErrorCode
}{
// When request is nil, internal error is returned.
{nil, ErrInternalError},
// When request is unsigned, access denied is returned.
{mustNewRequest("GET", "http://localhost:9000", 0, nil, t), ErrAccessDenied},
// When request is properly signed, but has bad Content-MD5 header.
{mustNewSignedRequest("PUT", "http://localhost:9000", 5, bytes.NewReader([]byte("hello")), t), ErrBadDigest},
// When request is properly signed, error is none.
{mustNewSignedRequest("GET", "http://localhost:9000", 0, nil, t), ErrNone},
}
// Validates all testcases.
for _, testCase := range testCases {
if testCase.s3Error == ErrBadDigest {
testCase.req.Header.Set("Content-Md5", "garbage")
}
if s3Error := isReqAuthenticated(testCase.req); s3Error != testCase.s3Error {
t.Fatalf("Unexpected s3error returned wanted %d, got %d", testCase.s3Error, s3Error)
}
}
}

benchmark-utils_test.go

@@ -0,0 +1,370 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"bytes"
"crypto/md5"
"encoding/hex"
"io/ioutil"
"math"
"math/rand"
"strconv"
"testing"
"time"
)
// Benchmark utility functions for ObjectLayer.PutObject().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObject benchmark.
func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
var err error
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucket(bucket)
if err != nil {
b.Fatal(err)
}
// PutObject returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum string
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
// insert the object.
md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
}
}
// Benchmark ends here. Stop timer.
b.StopTimer()
}
// Benchmark utility functions for ObjectLayer.PutObjectPart().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObjectPart benchmark.
func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
var err error
// obtains random bucket name.
bucket := getRandomBucketName()
object := getRandomObjectName()
// create bucket.
err = obj.MakeBucket(bucket)
if err != nil {
b.Fatal(err)
}
objSize := 128 * 1024 * 1024
// PutObjectPart returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum, uploadID string
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for NewMultipartUpload.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
uploadID, err = obj.NewMultipartUpload(bucket, object, metadata)
if err != nil {
b.Fatal(err)
}
var textPartData []byte
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObjectPart starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
// insert the object.
totalPartsNR := int(math.Ceil(float64(objSize) / float64(partSize)))
for j := 0; j < totalPartsNR; j++ {
hasher.Reset()
if j < totalPartsNR-1 {
// Slice a full partSize chunk for every part except the last one.
textPartData = textData[j*partSize : (j+1)*partSize]
} else {
textPartData = textData[j*partSize:]
}
hasher.Write([]byte(textPartData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
md5Sum, err = obj.PutObjectPart(bucket, object, uploadID, j, int64(len(textPartData)), bytes.NewBuffer(textPartData), metadata["md5Sum"])
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
}
}
}
// Benchmark ends here. Stop timer.
b.StopTimer()
}
// creates XL/FS backend setup, obtains the object layer and calls the runPutObjectPartBenchmark function.
func benchmarkPutObjectPart(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created on function return.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runPutObjectPartBenchmark(b, objLayer, objSize)
}
// creates XL/FS backend setup, obtains the object layer and calls the runPutObjectBenchmark function.
func benchmarkPutObject(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created on function return.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runPutObjectBenchmark(b, objLayer, objSize)
}
// creates XL/FS backend setup, obtains the object layer and runs parallel benchmark for put object.
func benchmarkPutObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created on function return.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runPutObjectBenchmarkParallel(b, objLayer, objSize)
}
// Benchmark utility functions for ObjectLayer.GetObject().
// Creates Object layer setup ( MakeBucket, PutObject) and then runs the benchmark.
func runGetObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
var err error
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucket(bucket)
if err != nil {
b.Fatal(err)
}
// PutObject returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum string
for i := 0; i < 10; i++ {
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the functions which writes the data onto the FS/XL backend.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
// insert the object.
md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
}
}
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for GetObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ {
var buffer = new(bytes.Buffer)
err = obj.GetObject(bucket, "object"+strconv.Itoa(i%10), 0, int64(objSize), buffer)
if err != nil {
b.Error(err)
}
}
// Benchmark ends here. Stop timer.
b.StopTimer()
}
// randomly picks a character and returns its equivalent byte array.
func getRandomByte() []byte {
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
// seeding the random number generator.
rand.Seed(time.Now().UnixNano())
var b byte
// pick a character randomly.
b = letterBytes[rand.Intn(len(letterBytes))]
return []byte{b}
}
// picks a random byte and repeats it to size bytes.
func generateBytesData(size int) []byte {
// repeat the randomly chosen character, size times.
return bytes.Repeat(getRandomByte(), size)
}
// creates XL/FS backend setup, obtains the object layer and calls the runGetObjectBenchmark function.
func benchmarkGetObject(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runGetObjectBenchmark(b, objLayer, objSize)
}
// creates XL/FS backend setup, obtains the object layer and runs parallel benchmark for ObjectLayer.GetObject() .
func benchmarkGetObjectParallel(b *testing.B, instanceType string, objSize int) {
// create a temp XL/FS backend.
objLayer, disks, err := makeTestBackend(instanceType)
if err != nil {
b.Fatalf("Failed obtaining Temp Backend: <ERROR> %s", err)
}
// cleaning up the backend by removing all the directories and files created.
defer removeRoots(disks)
// uses *testing.B and the object Layer to run the benchmark.
runGetObjectBenchmarkParallel(b, objLayer, objSize)
}
// Parallel benchmark utility functions for ObjectLayer.PutObject().
// Creates Object layer setup ( MakeBucket ) and then runs the PutObject benchmark.
func runPutObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
var err error
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucket(bucket)
if err != nil {
b.Fatal(err)
}
// PutObject returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum string
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
i := 0
for pb.Next() {
// insert the object.
md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", md5Sum, metadata["md5Sum"])
}
i++
}
})
// Benchmark ends here. Stop timer.
b.StopTimer()
}
// Parallel benchmark utility functions for ObjectLayer.GetObject().
// Creates Object layer setup ( MakeBucket, PutObject) and then runs the benchmark.
func runGetObjectBenchmarkParallel(b *testing.B, obj ObjectLayer, objSize int) {
var err error
// obtains random bucket name.
bucket := getRandomBucketName()
// create bucket.
err = obj.MakeBucket(bucket)
if err != nil {
b.Fatal(err)
}
// PutObject returns md5Sum of the object inserted.
// md5Sum variable is assigned with that value.
var md5Sum string
for i := 0; i < 10; i++ {
// get text data generated for number of bytes equal to object size.
textData := generateBytesData(objSize)
// generate md5sum for the generated data.
// md5sum of the data to be written is required as input for PutObject.
// PutObject is the functions which writes the data onto the FS/XL backend.
hasher := md5.New()
hasher.Write([]byte(textData))
metadata := make(map[string]string)
metadata["md5Sum"] = hex.EncodeToString(hasher.Sum(nil))
// insert the object.
md5Sum, err = obj.PutObject(bucket, "object"+strconv.Itoa(i), int64(len(textData)), bytes.NewBuffer(textData), metadata)
if err != nil {
b.Fatal(err)
}
if md5Sum != metadata["md5Sum"] {
b.Fatalf("Write no: %d: Md5Sum mismatch during object write into the bucket: Expected %s, got %s", i+1, md5Sum, metadata["md5Sum"])
}
}
// benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs()
// the actual benchmark for GetObject starts here. Reset the benchmark timer.
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
i := 0
for pb.Next() {
err = obj.GetObject(bucket, "object"+strconv.Itoa(i), 0, int64(objSize), ioutil.Discard)
if err != nil {
b.Error(err)
}
i++
if i == 10 {
i = 0
}
}
})
// Benchmark ends here. Stop timer.
b.StopTimer()
}
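For completeness, a sketch (not part of this changeset) of how the helpers above would typically be invoked from Go benchmark entry points in a _test.go file; the benchmark names, object sizes and the "FS"/"XL" instance-type strings are illustrative assumptions based on the backend names mentioned in the comments above.
package main

import "testing"

// BenchmarkPutObject1MbFSSketch is a sketch only: it benchmarks a 1 MiB
// PutObject against a single-disk FS backend via the helper above.
func BenchmarkPutObject1MbFSSketch(b *testing.B) {
	benchmarkPutObject(b, "FS", 1024*1024)
}

// BenchmarkGetObject1MbXLSketch is a sketch only: it benchmarks a 1 MiB
// GetObject against the erasure-coded XL backend via the helper above.
func BenchmarkGetObject1MbXLSketch(b *testing.B) {
	benchmarkGetObject(b, "XL", 1024*1024)
}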


@@ -0,0 +1,167 @@
/*
* Minio Cloud Storage, (C) 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"net/http"
"strings"
"github.com/gorilla/mux"
)
// Validate all the ListObjects query arguments, returns an APIErrorCode
// if one of the args does not meet the required conditions.
// Special conditions required by Minio server are as below
// - delimiter if set should be equal to '/', otherwise the request is rejected.
// - marker if set should have a common prefix with 'prefix' param, otherwise
// the request is rejected.
func listObjectsValidateArgs(prefix, marker, delimiter string, maxKeys int) APIErrorCode {
// Max keys cannot be negative.
if maxKeys < 0 {
return ErrInvalidMaxKeys
}
/// Minio special conditions for ListObjects.
// Verify if delimiter is anything other than '/', which we do not support.
if delimiter != "" && delimiter != "/" {
return ErrNotImplemented
}
// If marker is set, validate the pre-condition.
if marker != "" {
// Marker not common with prefix is not implemented.
if !strings.HasPrefix(marker, prefix) {
return ErrNotImplemented
}
}
// Success.
return ErrNone
}
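A few illustrative calls (not part of this changeset) showing which argument combinations the validation above accepts and rejects.
package main

// exampleListObjectsValidation is a sketch only; the return values noted in
// the comments follow directly from the checks in listObjectsValidateArgs.
func exampleListObjectsValidation() {
	// Accepted: '/' delimiter and a marker that shares the requested prefix.
	_ = listObjectsValidateArgs("photos/", "photos/2016/jan", "/", 100) // ErrNone

	// Rejected: only the '/' delimiter is supported.
	_ = listObjectsValidateArgs("photos/", "", "|", 100) // ErrNotImplemented

	// Rejected: a marker that does not share the prefix is not implemented.
	_ = listObjectsValidateArgs("photos/", "videos/clip", "/", 100) // ErrNotImplemented

	// Rejected: negative max-keys.
	_ = listObjectsValidateArgs("", "", "", -1) // ErrInvalidMaxKeys
}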
// ListObjectsV2Handler - GET Bucket (List Objects) Version 2.
// --------------------------
// This implementation of the GET operation returns some or all (up to 1000)
// of the objects in a bucket. You can use the request parameters as selection
// criteria to return a subset of the objects in a bucket.
//
// NOTE: It is recommended that this API be used for application development.
// Minio continues to support ListObjectsV1 for legacy tools.
func (api objectAPIHandlers) ListObjectsV2Handler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypeAnonymous:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
if s3Error := enforceBucketPolicy(bucket, "s3:ListBucket", r.URL); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
// Extract all the listObjectsV2 query params to their native values.
prefix, token, startAfter, delimiter, maxKeys, _ := getListObjectsV2Args(r.URL.Query())
// In ListObjectsV2 'continuation-token' is the marker.
marker := token
// Check if 'continuation-token' is empty.
if token == "" {
// Then we need to use 'start-after' as marker instead.
marker = startAfter
}
// Validate all the query params before beginning to serve the request.
if s3Error := listObjectsValidateArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
// Initiate a list objects operation based on the input params.
// On success it returns a ListObjectsInfo object to be
// marshalled into an S3 compatible XML response.
listObjectsInfo, err := api.ObjectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
if err != nil {
errorIf(err, "Unable to list objects.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
response := generateListObjectsV2Response(bucket, prefix, token, startAfter, delimiter, maxKeys, listObjectsInfo)
// Write headers
setCommonHeaders(w)
// Write success response.
writeSuccessResponse(w, encodeResponse(response))
}
// ListObjectsV1Handler - GET Bucket (List Objects) Version 1.
// --------------------------
// This implementation of the GET operation returns some or all (up to 1000)
// of the objects in a bucket. You can use the request parameters as selection
// criteria to return a subset of the objects in a bucket.
//
func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypeAnonymous:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
if s3Error := enforceBucketPolicy(bucket, "s3:ListBucket", r.URL); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
// Extract all the listObjectsV1 query params to their native values.
prefix, marker, delimiter, maxKeys, _ := getListObjectsV1Args(r.URL.Query())
// Validate all the query params before beginning to serve the request.
if s3Error := listObjectsValidateArgs(prefix, marker, delimiter, maxKeys); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
// Initiate a list objects operation based on the input params.
// On success it returns a ListObjectsInfo object to be
// marshalled into an S3 compatible XML response.
listObjectsInfo, err := api.ObjectAPI.ListObjects(bucket, prefix, marker, delimiter, maxKeys)
if err != nil {
errorIf(err, "Unable to list objects.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
response := generateListObjectsV1Response(bucket, prefix, marker, delimiter, maxKeys, listObjectsInfo)
// Write headers
setCommonHeaders(w)
// Write success response.
writeSuccessResponse(w, encodeResponse(response))
}

bucket-handlers.go

@@ -0,0 +1,447 @@
/*
* Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"encoding/xml"
"io"
"net/http"
"net/url"
"path"
"strings"
mux "github.com/gorilla/mux"
)
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
// Enforces bucket policies for a bucket for a given action.
func enforceBucketPolicy(bucket string, action string, reqURL *url.URL) (s3Error APIErrorCode) {
if !IsValidBucketName(bucket) {
return ErrInvalidBucketName
}
// Fetch bucket policy, if policy is not set return access denied.
policy := globalBucketPolicies.GetBucketPolicy(bucket)
if policy == nil {
return ErrAccessDenied
}
// Construct resource in 'arn:aws:s3:::examplebucket/object' format.
resource := AWSResourcePrefix + strings.TrimPrefix(reqURL.Path, "/")
// Get conditions for policy verification.
conditions := make(map[string]string)
for queryParam := range reqURL.Query() {
conditions[queryParam] = reqURL.Query().Get(queryParam)
}
// Validate action, resource and conditions with current policy statements.
if !bucketPolicyEvalStatements(action, resource, conditions, policy.Statements) {
return ErrAccessDenied
}
return ErrNone
}
// GetBucketLocationHandler - GET Bucket location.
// -------------------------
// This operation returns bucket location.
func (api objectAPIHandlers) GetBucketLocationHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypeAnonymous:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
if s3Error := enforceBucketPolicy(bucket, "s3:GetBucketLocation", r.URL); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypeSigned, authTypePresigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
if _, err := api.ObjectAPI.GetBucketInfo(bucket); err != nil {
errorIf(err, "Unable to fetch bucket info.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Generate response.
encodedSuccessResponse := encodeResponse(LocationResponse{})
// Get current region.
region := serverConfig.GetRegion()
if region != "us-east-1" {
encodedSuccessResponse = encodeResponse(LocationResponse{
Location: region,
})
}
setCommonHeaders(w) // Write headers.
writeSuccessResponse(w, encodedSuccessResponse)
}
// ListMultipartUploadsHandler - GET Bucket (List Multipart uploads)
// -------------------------
// This operation lists in-progress multipart uploads. An in-progress
// multipart upload is a multipart upload that has been initiated,
// using the Initiate Multipart Upload request, but has not yet been
// completed or aborted. This operation returns at most 1,000 multipart
// uploads in the response.
//
func (api objectAPIHandlers) ListMultipartUploadsHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypeAnonymous:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
if s3Error := enforceBucketPolicy(bucket, "s3:ListBucketMultipartUploads", r.URL); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, _ := getBucketMultipartResources(r.URL.Query())
if maxUploads < 0 {
writeErrorResponse(w, r, ErrInvalidMaxUploads, r.URL.Path)
return
}
if keyMarker != "" {
// Marker not common with prefix is not implemented.
if !strings.HasPrefix(keyMarker, prefix) {
writeErrorResponse(w, r, ErrNotImplemented, r.URL.Path)
return
}
}
listMultipartsInfo, err := api.ObjectAPI.ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter, maxUploads)
if err != nil {
errorIf(err, "Unable to list multipart uploads.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// generate response
response := generateListMultipartUploadsResponse(bucket, listMultipartsInfo)
encodedSuccessResponse := encodeResponse(response)
// write headers.
setCommonHeaders(w)
// write success response.
writeSuccessResponse(w, encodedSuccessResponse)
}
// ListBucketsHandler - GET Service
// -----------
// This implementation of the GET operation returns a list of all buckets
// owned by the authenticated sender of the request.
func (api objectAPIHandlers) ListBucketsHandler(w http.ResponseWriter, r *http.Request) {
// List buckets does not support bucket policies, no need to enforce it.
if s3Error := checkAuth(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
bucketsInfo, err := api.ObjectAPI.ListBuckets()
if err != nil {
errorIf(err, "Unable to list buckets.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Generate response.
response := generateListBucketsResponse(bucketsInfo)
encodedSuccessResponse := encodeResponse(response)
// Write headers.
setCommonHeaders(w)
// Write response.
writeSuccessResponse(w, encodedSuccessResponse)
}
// DeleteMultipleObjectsHandler - deletes multiple objects.
func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypeAnonymous:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
if s3Error := enforceBucketPolicy(bucket, "s3:DeleteObject", r.URL); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
// Content-Length is required and should be non-zero
// http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
if r.ContentLength <= 0 {
writeErrorResponse(w, r, ErrMissingContentLength, r.URL.Path)
return
}
// Content-Md5 is required and should be set
// http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
if _, ok := r.Header["Content-Md5"]; !ok {
writeErrorResponse(w, r, ErrMissingContentMD5, r.URL.Path)
return
}
// Allocate incoming content length bytes.
deleteXMLBytes := make([]byte, r.ContentLength)
// Read incoming body XML bytes.
if _, err := io.ReadFull(r.Body, deleteXMLBytes); err != nil {
errorIf(err, "Unable to read HTTP body.")
writeErrorResponse(w, r, ErrInternalError, r.URL.Path)
return
}
// Unmarshal list of keys to be deleted.
deleteObjects := &DeleteObjectsRequest{}
if err := xml.Unmarshal(deleteXMLBytes, deleteObjects); err != nil {
errorIf(err, "Unable to unmarshal delete objects request XML.")
writeErrorResponse(w, r, ErrMalformedXML, r.URL.Path)
return
}
var deleteErrors []DeleteError
var deletedObjects []ObjectIdentifier
// Loop through all the objects and delete them sequentially.
for _, object := range deleteObjects.Objects {
err := api.ObjectAPI.DeleteObject(bucket, object.ObjectName)
if err == nil {
deletedObjects = append(deletedObjects, ObjectIdentifier{
ObjectName: object.ObjectName,
})
} else {
errorIf(err, "Unable to delete object.")
deleteErrors = append(deleteErrors, DeleteError{
Code: errorCodeResponse[toAPIErrorCode(err)].Code,
Message: errorCodeResponse[toAPIErrorCode(err)].Description,
Key: object.ObjectName,
})
}
}
// Generate response
response := generateMultiDeleteResponse(deleteObjects.Quiet, deletedObjects, deleteErrors)
encodedSuccessResponse := encodeResponse(response)
// Write headers
setCommonHeaders(w)
// Write success response.
writeSuccessResponse(w, encodedSuccessResponse)
}
// PutBucketHandler - PUT Bucket
// ----------
// This implementation of the PUT operation creates a new bucket for authenticated request
func (api objectAPIHandlers) PutBucketHandler(w http.ResponseWriter, r *http.Request) {
// PutBucket does not support policies, use checkAuth to validate signature.
if s3Error := checkAuth(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
// Validate if incoming location constraint is valid, reject
// requests which do not follow valid region requirements.
if s3Error := isValidLocationConstraint(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
// Proceed to creating a bucket.
err := api.ObjectAPI.MakeBucket(bucket)
if err != nil {
errorIf(err, "Unable to create a bucket.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Make sure to add Location information here only for bucket
w.Header().Set("Location", getLocation(r))
writeSuccessResponse(w, nil)
}
// PostPolicyBucketHandler - POST policy
// ----------
// This implementation of the POST operation handles object creation with a specified
// signature policy in multipart/form-data
func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *http.Request) {
// Use a multipart reader so the form data is streamed rather than
// loaded fully into memory.
reader, err := r.MultipartReader()
if err != nil {
errorIf(err, "Unable to initialize multipart reader.")
writeErrorResponse(w, r, ErrMalformedPOSTRequest, r.URL.Path)
return
}
fileBody, fileName, formValues, err := extractPostPolicyFormValues(reader)
if err != nil {
errorIf(err, "Unable to parse form values.")
writeErrorResponse(w, r, ErrMalformedPOSTRequest, r.URL.Path)
return
}
bucket := mux.Vars(r)["bucket"]
formValues["Bucket"] = bucket
object := formValues["Key"]
if fileName != "" && strings.Contains(object, "${filename}") {
// S3 feature to replace ${filename} found in Key form field
// by the filename attribute passed in multipart
object = strings.Replace(object, "${filename}", fileName, -1)
}
// Verify policy signature.
apiErr := doesPolicySignatureMatch(formValues)
if apiErr != ErrNone {
writeErrorResponse(w, r, apiErr, r.URL.Path)
return
}
if apiErr = checkPostPolicy(formValues); apiErr != ErrNone {
writeErrorResponse(w, r, apiErr, r.URL.Path)
return
}
// Save metadata.
metadata := make(map[string]string)
// Nothing to store right now.
md5Sum, err := api.ObjectAPI.PutObject(bucket, object, -1, fileBody, metadata)
if err != nil {
errorIf(err, "Unable to create object.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
if md5Sum != "" {
w.Header().Set("ETag", "\""+md5Sum+"\"")
}
encodedSuccessResponse := encodeResponse(PostResponse{
Location: getObjectLocation(bucket, object), // TODO Full URL is preferred
Bucket: bucket,
Key: object,
ETag: md5Sum,
})
setCommonHeaders(w)
writeSuccessResponse(w, encodedSuccessResponse)
// Fetch object info for notifications.
objInfo, err := api.ObjectAPI.GetObjectInfo(bucket, object)
if err != nil {
errorIf(err, "Unable to fetch object info for \"%s\"", path.Join(bucket, object))
return
}
if eventN.IsBucketNotificationSet(bucket) {
// Notify object created event.
eventNotify(eventData{
Type: ObjectCreatedPost,
Bucket: bucket,
ObjInfo: objInfo,
ReqParams: map[string]string{
"sourceIPAddress": r.RemoteAddr,
},
})
}
}
// HeadBucketHandler - HEAD Bucket
// ----------
// This operation is useful to determine if a bucket exists.
// The operation returns a 200 OK if the bucket exists and you
// have permission to access it. Otherwise, the operation might
// return responses such as 404 Not Found and 403 Forbidden.
func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
bucket := vars["bucket"]
switch getRequestAuthType(r) {
default:
// For all unknown auth types return error.
writeErrorResponse(w, r, ErrAccessDenied, r.URL.Path)
return
case authTypeAnonymous:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
if s3Error := enforceBucketPolicy(bucket, "s3:ListBucket", r.URL); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
case authTypePresigned, authTypeSigned:
if s3Error := isReqAuthenticated(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
}
if _, err := api.ObjectAPI.GetBucketInfo(bucket); err != nil {
errorIf(err, "Unable to fetch bucket info.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
writeSuccessResponse(w, nil)
}
// DeleteBucketHandler - Delete bucket
func (api objectAPIHandlers) DeleteBucketHandler(w http.ResponseWriter, r *http.Request) {
// DeleteBucket does not support bucket policies, use checkAuth to validate signature.
if s3Error := checkAuth(r); s3Error != ErrNone {
writeErrorResponse(w, r, s3Error, r.URL.Path)
return
}
vars := mux.Vars(r)
bucket := vars["bucket"]
// Attempt to delete bucket.
if err := api.ObjectAPI.DeleteBucket(bucket); err != nil {
errorIf(err, "Unable to delete a bucket.")
writeErrorResponse(w, r, toAPIErrorCode(err), r.URL.Path)
return
}
// Delete bucket access policy, if present - ignore any errors.
removeBucketPolicy(bucket, api.ObjectAPI)
// Delete notification config, if present - ignore any errors.
removeNotificationConfig(bucket, api.ObjectAPI)
// Write success response.
writeSuccessNoContent(w)
}

Some files were not shown because too many files have changed in this diff.