* [JGIT PATCH 00/21] Push support over SFTP and (encrypted) Amazon S3
@ 2008-06-29 7:59 Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 01/21] Remove unused index files when WalkFetchConnection closes Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
People have often asked on the mailing list if Git can push to
a remote server directly over SFTP, without needing to have Git
installed on the remote system. This mode of operation can be
useful if the remote server is an inexpensive hosting account
and the user wants to publish over HTTP.
With this series jgit can now push directly over sftp:// style
URIs without needing Git to be installed on the remote system.
Both the real refs and the dumb transport support files (such
as info/refs) are updated during the push.
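For example (the host and path here are hypothetical), publishing to
a plain hosting account looks much like any other remote:
  git remote add web sftp://user@example.com/public_html/repo.git
  jgit push web refs/heads/master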
A transport for Amazon S3 (http://aws.amazon.com/s3) is also
included. S3 is an inexpensive network-based storage system
provided as a commercial service by Amazon. Public data stored
in S3 is available over standard HTTP, making it an inexpensive
hosting provider.
Git repositories pushed to S3 may optionally be transparently
encrypted with an encryption key of the user's choosing, hiding the
repository content from Amazon. An encrypted repository may only be
accessed by jgit, or by downloading it through jets3t, and accessing
the local copy. (This is because all encryption/decryption occurs
on the client.)
I wanted the S3 support just so I could back up my repositories as
easily as I can back up through native Git. It's as simple as:
touch ~/.s3_ident
chmod 600 ~/.s3_ident
cat >~/.s3_ident
accesskey: <AWSAccessKeyId>
secretkey: <AWSSecretAccessKey>
password: <secretpassphrasetoseedencryption>
^D
jgit push amazon-s3://.s3_ident@bucket/repo.git refs/heads/master
The bucket must have already been created with another S3 client.
I consider it outside the scope of jgit to register buckets.
However, the repository name can be any string of your choosing and
the repository will be created on S3 during the first push.
You can also clone this branch off S3 using anonymous HTTP:
git clone http://gitney.s3.amazonaws.com/projects/egit.git
I pushed the above repository with:
touch ~/.s3_pub
chmod 600 ~/.s3_pub
cat >~/.s3_pub
accesskey: <AWSAccessKeyId>
secretkey: <AWSSecretAccessKey>
acl: public
^D
git remote add s3 amazon-s3://.s3_pub@gitney/projects/egit.git
jgit push s3 refs/heads/dumb-push
After writing the full S3 client from scratch and implementing an
encryption scheme that is compatible with jets3t (a popular Java-based
S3 client), I've realized that jets3t's encryption scheme is not as
strong as it could be, especially if you can recognize a pattern in
the plain text (such as the format of info/refs, or even of a pack
and pack index). As such, the encryption used by jgit is "eh, ok".
This may be an area for improvement in future versions, but at
present it should at least stop any sort of casual snooping.
This series is based on `pu` as it requires both Marek's push topic
and my index-v2 topic.
----
Robert Harder (1):
Add Robert Harder's public domain Base64 encoding utility
Shawn O. Pearce (20):
Remove unused index files when WalkFetchConnection closes
Do not show URIish passwords in TransportExceptions
Use PackedObjectInfo as a base class for PackWriter's ObjectToPack
Refactor PackWriter to hold onto the sorted object list
Save the pack checksum after computing it in PackWriter
Allow PackIndexWriter to use any subclass of PackedObjectInfo
Allow PackWriter to create a corresponding index file
Allow PackWriter to prepare object list and compute name before
writing
Remember how a Ref was read in from disk and created
Simplify walker transport ref advertisement setup
Indicate the protocol jgit doesn't support push over
WalkTransport must allow subclasses to implement openPush
Support push over the sftp:// dumb transport
Extract readPackedRefs from TransportSftp for reuse
Specialized byte array output stream for large files
Misc. documentation fixes to Base64 utility
Extract the basic HTTP proxy support to its own class
Create a really simple Amazon S3 REST client
Add client side encryption to Amazon S3 client library
Bidirectional protocol support for Amazon S3
.../tst/org/spearce/jgit/lib/PackWriterTest.java | 8 +-
.../spearce/jgit/transport/PushProcessTest.java | 94 +-
.../spearce/jgit/transport/RefSpecTestCase.java | 26 +-
.../spearce/jgit/errors/TransportException.java | 4 +-
.../src/org/spearce/jgit/lib/PackIndexWriter.java | 6 +-
.../src/org/spearce/jgit/lib/PackWriter.java | 216 ++--
org.spearce.jgit/src/org/spearce/jgit/lib/Ref.java | 91 ++-
.../src/org/spearce/jgit/lib/RefDatabase.java | 23 +-
.../src/org/spearce/jgit/pgm/Main.java | 36 +-
.../src/org/spearce/jgit/transport/AmazonS3.java | 770 ++++++++++
.../spearce/jgit/transport/BasePackConnection.java | 6 +-
.../jgit/transport/BasePackPushConnection.java | 5 +-
.../src/org/spearce/jgit/transport/Transport.java | 3 +
.../spearce/jgit/transport/TransportAmazonS3.java | 319 +++++
.../spearce/jgit/transport/TransportBundle.java | 3 +-
.../org/spearce/jgit/transport/TransportHttp.java | 64 +-
.../org/spearce/jgit/transport/TransportSftp.java | 162 ++-
.../src/org/spearce/jgit/transport/URIish.java | 24 +-
.../org/spearce/jgit/transport/WalkEncryption.java | 188 +++
.../jgit/transport/WalkFetchConnection.java | 2 +
.../spearce/jgit/transport/WalkPushConnection.java | 296 ++++
.../jgit/transport/WalkRemoteObjectDatabase.java | 301 ++++
.../org/spearce/jgit/transport/WalkTransport.java | 8 +-
.../src/org/spearce/jgit/util/Base64.java | 1465 ++++++++++++++++++++
.../src/org/spearce/jgit/util/HttpSupport.java | 165 +++
.../src/org/spearce/jgit/util/TemporaryBuffer.java | 260 ++++
26 files changed, 4250 insertions(+), 295 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/TransportAmazonS3.java
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/WalkEncryption.java
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/WalkPushConnection.java
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/util/HttpSupport.java
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/util/TemporaryBuffer.java
* [JGIT PATCH 01/21] Remove unused index files when WalkFetchConnection closes
2008-06-29 7:59 [JGIT PATCH 00/21] Push support over SFTP and (encrypted) Amazon S3 Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 02/21] Do not show URIish passwords in TransportExceptions Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
If we downloaded an index but then didn't download the corresponding
pack file, we never deleted the index from disk. We should clear any
unused indexes that are left when we terminate the connection.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../jgit/transport/WalkFetchConnection.java | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkFetchConnection.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkFetchConnection.java
index 78116b2..5a21d24 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkFetchConnection.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkFetchConnection.java
@@ -201,6 +201,8 @@ class WalkFetchConnection extends BaseFetchConnection {
@Override
public void close() {
+ for (final RemotePack p : unfetchedPacks)
+ p.tmpIdx.delete();
for (final WalkRemoteObjectDatabase r : remotes)
r.close();
}
--
1.5.6.74.g8a5e
* [JGIT PATCH 02/21] Do not show URIish passwords in TransportExceptions
2008-06-29 7:59 ` [JGIT PATCH 01/21] Remove unused index files when WalkFetchConnection closes Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 03/21] Use PackedObjectInfo as a base class for PackWriter's ObjectToPack Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
When constructing a transport exception the message may be shown
on screen. If a password was in the URIish then we may wind up
showing the user's password, perhaps while someone else is
looking over the user's shoulder and reading their monitor. By
setting the password field to null we avoid displaying it.
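A rough sketch of the effect (the URI and message here are made up;
exception handling omitted):
  final URIish uri = new URIish("sftp://user:secret@example.com/repo.git");
  final TransportException e =
      new TransportException(uri, "cannot read info/refs");
  // e.getMessage() begins with "sftp://user@example.com/repo.git: ..."
  // because the constructor formats uri.setPass(null) instead of uri,
  // dropping the password from the displayed URI.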
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../spearce/jgit/errors/TransportException.java | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/errors/TransportException.java b/org.spearce.jgit/src/org/spearce/jgit/errors/TransportException.java
index 4a8e37c..7fbbc5a 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/errors/TransportException.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/errors/TransportException.java
@@ -58,7 +58,7 @@ public class TransportException extends IOException {
* message
*/
public TransportException(final URIish uri, final String s) {
- super(uri + ": " + s);
+ super(uri.setPass(null) + ": " + s);
}
/**
@@ -74,7 +74,7 @@ public class TransportException extends IOException {
*/
public TransportException(final URIish uri, final String s,
final Throwable cause) {
- this(uri + ": " + s, cause);
+ this(uri.setPass(null) + ": " + s, cause);
}
/**
--
1.5.6.74.g8a5e
* [JGIT PATCH 03/21] Use PackedObjectInfo as a base class for PackWriter's ObjectToPack
2008-06-29 7:59 ` [JGIT PATCH 02/21] Do not show URIish passwords in TransportExceptions Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 04/21] Refactor PackWriter to hold onto the sorted object list Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
The ObjectId and offset portions of PackedObjectInfo are also
needed by ObjectToPack. By sharing the same base class with
IndexPack we can later abstract the index writing function out
and use it inside of both IndexPack and PackWriter.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/lib/PackWriter.java | 28 +++-----------------
1 files changed, 4 insertions(+), 24 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
index cec2ab0..ccc6cfe 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
@@ -56,6 +56,7 @@ import org.spearce.jgit.revwalk.ObjectWalk;
import org.spearce.jgit.revwalk.RevFlag;
import org.spearce.jgit.revwalk.RevObject;
import org.spearce.jgit.revwalk.RevSort;
+import org.spearce.jgit.transport.PackedObjectInfo;
import org.spearce.jgit.util.CountingOutputStream;
import org.spearce.jgit.util.NB;
@@ -617,7 +618,7 @@ public class PackWriter {
assert !otp.isWritten();
- otp.markWritten(countingOut.getCount());
+ otp.setOffset(countingOut.getCount());
if (otp.isDeltaRepresentation())
writeDeltaObject(otp);
else
@@ -762,13 +763,11 @@ public class PackWriter {
* pack-file and object status.
*
*/
- static class ObjectToPack extends ObjectId {
+ static class ObjectToPack extends PackedObjectInfo {
private ObjectId deltaBase;
private PackedObjectLoader reuseLoader;
- private long offset = -1;
-
private int deltaDepth;
private boolean wantWrite;
@@ -838,26 +837,7 @@ public class PackWriter {
* @return true if object is already written; false otherwise.
*/
boolean isWritten() {
- return offset != -1;
- }
-
- /**
- * @return offset in pack when object has been already written, or -1 if
- * it has not been written yet
- */
- long getOffset() {
- return offset;
- }
-
- /**
- * Mark object as written. This information is used to achieve
- * delta-base precedence in a pack file.
- *
- * @param offset
- * offset where written object starts
- */
- void markWritten(long offset) {
- this.offset = offset;
+ return getOffset() != 0;
}
PackedObjectLoader getReuseLoader() {
--
1.5.6.74.g8a5e
* [JGIT PATCH 04/21] Refactor PackWriter to hold onto the sorted object list
2008-06-29 7:59 ` [JGIT PATCH 03/21] Use PackedObjectInfo as a base class for PackWriter's ObjectToPack Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 05/21] Save the pack checksum after computing it in PackWriter Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
When creating pack files we sometimes need the sorted list of all
objects in the pack for two reasons. The first is to get the name
of the pack, which computeName() returns today. The other is to
generate a corresponding .idx file for the pack, to support
random access into the data.
Since not all uses of PackWriter require the sorted object list
(for example, streaming the pack to a network socket), the sorting
is done on demand and cached to avoid doing it a second time.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/lib/PackWriter.java | 24 ++++++++++++-------
1 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
index ccc6cfe..0f4cbb4 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
@@ -180,6 +180,8 @@ public class PackWriter {
private final WindowCursor windowCursor = new WindowCursor();
+ private List<ObjectToPack> sortedByName;
+
private boolean reuseDeltas = DEFAULT_REUSE_DELTAS;
private boolean reuseObjects = DEFAULT_REUSE_OBJECTS;
@@ -470,22 +472,26 @@ public class PackWriter {
* @return ObjectId representing SHA-1 name of a pack that was created.
*/
public ObjectId computeName() {
- final ArrayList<ObjectToPack> sorted = new ArrayList<ObjectToPack>(
- objectsMap.size());
- for (List<ObjectToPack> list : objectsLists) {
- for (ObjectToPack otp : list)
- sorted.add(otp);
- }
-
final MessageDigest md = Constants.newMessageDigest();
- Collections.sort(sorted);
- for (ObjectToPack otp : sorted) {
+ for (ObjectToPack otp : sortByName()) {
otp.copyRawTo(buf, 0);
md.update(buf, 0, Constants.OBJECT_ID_LENGTH);
}
return ObjectId.fromRaw(md.digest());
}
+ private List<ObjectToPack> sortByName() {
+ if (sortedByName == null) {
+ sortedByName = new ArrayList<ObjectToPack>(objectsMap.size());
+ for (List<ObjectToPack> list : objectsLists) {
+ for (ObjectToPack otp : list)
+ sortedByName.add(otp);
+ }
+ Collections.sort(sortedByName);
+ }
+ return sortedByName;
+ }
+
private void writePackInternal() throws IOException {
if (reuseDeltas || reuseObjects)
searchForReuse();
--
1.5.6.74.g8a5e
* [JGIT PATCH 05/21] Save the pack checksum after computing it in PackWriter
2008-06-29 7:59 ` [JGIT PATCH 04/21] Refactor PackWriter to hold onto the sorted object list Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 06/21] Allow PackIndexWriter to use any subclass of PackedObjectInfo Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
In order to create a matching .idx file for the pack we have
written out, we must retain the last 20 bytes of the pack file so
we can include them at the trailing end of the index file.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/lib/PackWriter.java | 6 ++++--
1 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
index 0f4cbb4..6adb629 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
@@ -182,6 +182,8 @@ public class PackWriter {
private List<ObjectToPack> sortedByName;
+ private byte packcsum[];
+
private boolean reuseDeltas = DEFAULT_REUSE_DELTAS;
private boolean reuseObjects = DEFAULT_REUSE_OBJECTS;
@@ -690,8 +692,8 @@ public class PackWriter {
private void writeChecksum() throws IOException {
out.on(false);
- final byte checksum[] = out.getMessageDigest().digest();
- out.write(checksum);
+ packcsum = out.getMessageDigest().digest();
+ out.write(packcsum);
}
private ObjectWalk setUpWalker(
--
1.5.6.74.g8a5e
* [JGIT PATCH 06/21] Allow PackIndexWriter to use any subclass of PackedObjectInfo
2008-06-29 7:59 ` [JGIT PATCH 05/21] Save the pack checksum after computing it in PackWriter Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 07/21] Allow PackWriter to create a corresponding index file Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
Some users of PackIndexWriter may have extended PackedObjectInfo
to store additional implementation-specific details, and wish to
pass their own subclass instances in directly to avoid allocating
even more memory.
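A quick sketch of why the wildcard matters (the local names here are
hypothetical): PackWriter keeps its entries in a List<ObjectToPack>,
a subclass of PackedObjectInfo, and Java generics are invariant.
  final List<ObjectToPack> sorted = sortByName();
  // write(List<PackedObjectInfo>, byte[]) would reject this list;
  // write(List<? extends PackedObjectInfo>, byte[]) accepts it as-is.
  indexWriter.write(sorted, packChecksum);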
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/lib/PackIndexWriter.java | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/PackIndexWriter.java b/org.spearce.jgit/src/org/spearce/jgit/lib/PackIndexWriter.java
index 3d0050d..567f099 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/PackIndexWriter.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/PackIndexWriter.java
@@ -86,7 +86,7 @@ public abstract class PackIndexWriter {
*/
@SuppressWarnings("fallthrough")
public static PackIndexWriter createOldestPossible(final OutputStream dst,
- final List<PackedObjectInfo> objs) {
+ final List<? extends PackedObjectInfo> objs) {
int version = 1;
LOOP: for (final PackedObjectInfo oe : objs) {
switch (version) {
@@ -137,7 +137,7 @@ public abstract class PackIndexWriter {
protected final byte[] tmp;
/** The entries this writer must pack. */
- protected List<PackedObjectInfo> entries;
+ protected List<? extends PackedObjectInfo> entries;
/** SHA-1 checksum for the entire pack data. */
protected byte[] packChecksum;
@@ -172,7 +172,7 @@ public abstract class PackIndexWriter {
* an error occurred while writing to the output stream, or this
* index format cannot store the object data supplied.
*/
- public void write(final List<PackedObjectInfo> toStore,
+ public void write(final List<? extends PackedObjectInfo> toStore,
final byte[] packDataChecksum) throws IOException {
entries = toStore;
packChecksum = packDataChecksum;
--
1.5.6.74.g8a5e
* [JGIT PATCH 07/21] Allow PackWriter to create a corresponding index file
2008-06-29 7:59 ` [JGIT PATCH 06/21] Allow PackIndexWriter to use any subclass of PackedObjectInfo Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 08/21] Allow PackWriter to prepare object list and compute name before writing Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
If we are packing for local use, or are sending the pack file to
a dumb server, we must also generate the matching .idx file so Git
can use random access requests to read object data. Since all
of the necessary information is available in our ObjectToPack we
can just pass the sorted list to PackIndexWriter.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/lib/PackWriter.java | 38 ++++++++++++++++++++
1 files changed, 38 insertions(+), 0 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
index 6adb629..e346668 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
@@ -192,6 +192,8 @@ public class PackWriter {
private int maxDeltaDepth = DEFAULT_MAX_DELTA_DEPTH;
+ private int outputVersion;
+
private boolean thin;
/**
@@ -348,6 +350,18 @@ public class PackWriter {
}
/**
+ * Set the pack index file format version this instance will create.
+ *
+ * @param version
+ * the version to write. The special version 0 designates the
+ * oldest (most compatible) format available for the objects.
+ * @see PackIndexWriter
+ */
+ public void setIndexVersion(final int version) {
+ outputVersion = version;
+ }
+
+ /**
* Returns objects number in a pack file that was created by this writer.
*
* @return number of objects in pack.
@@ -482,6 +496,30 @@ public class PackWriter {
return ObjectId.fromRaw(md.digest());
}
+ /**
+ * Create an index file to match the pack file just written.
+ * <p>
+ * This method can only be invoked after {@link #writePack(Iterator)} or
+ * {@link #writePack(Collection, Collection, boolean, boolean)} has been
+ * invoked and completed successfully. Writing a corresponding index is an
+ * optional feature that not all pack users may require.
+ *
+ * @param indexStream
+ * output for the index data. Caller is responsible for closing
+ * this stream.
+ * @throws IOException
+ * the index data could not be written to the supplied stream.
+ */
+ public void writeIndex(final OutputStream indexStream) throws IOException {
+ final List<ObjectToPack> list = sortByName();
+ final PackIndexWriter iw;
+ if (outputVersion <= 0)
+ iw = PackIndexWriter.createOldestPossible(indexStream, list);
+ else
+ iw = PackIndexWriter.createVersion(indexStream, outputVersion);
+ iw.write(list, packcsum);
+ }
+
private List<ObjectToPack> sortByName() {
if (sortedByName == null) {
sortedByName = new ArrayList<ObjectToPack>(objectsMap.size());
--
1.5.6.74.g8a5e
* [JGIT PATCH 08/21] Allow PackWriter to prepare object list and compute name before writing
2008-06-29 7:59 ` [JGIT PATCH 07/21] Allow PackWriter to create a corresponding index file Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created Shawn O. Pearce
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
When we are writing a pack for a dumb protocol transport we want to
get the name of the pack prior to generating its output stream. This
permits us to open the pack stream directly under its final name and
write into it without needing to issue a rename in the middle of the
process. By splitting the pack preparation phase from the writing
phase we are able to call computeName() between the two stages and
create the OutputStream based upon the result.
To improve performance we now also buffer what we write to the pack
stream, if the pack stream was not already a buffered stream.
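For reference, a usage sketch of the split API (the wants/haves sets
and the output files are hypothetical; exception handling omitted):
  final PackWriter pw = new PackWriter(db, new TextProgressMonitor());
  pw.preparePack(wants, haves, false, true);
  final ObjectId packName = pw.computeName(); // known before any output
  final OutputStream packOut = new FileOutputStream(packFile); // pack-<name>.pack
  pw.writePack(packOut);
  packOut.close();
  final OutputStream idxOut = new FileOutputStream(idxFile); // pack-<name>.idx
  pw.writeIndex(idxOut);
  idxOut.close();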
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../tst/org/spearce/jgit/lib/PackWriterTest.java | 8 +-
.../src/org/spearce/jgit/lib/PackWriter.java | 124 +++++++++-----------
.../jgit/transport/BasePackPushConnection.java | 5 +-
3 files changed, 66 insertions(+), 71 deletions(-)
diff --git a/org.spearce.jgit.test/tst/org/spearce/jgit/lib/PackWriterTest.java b/org.spearce.jgit.test/tst/org/spearce/jgit/lib/PackWriterTest.java
index 3f07a57..4dd4b2a 100644
--- a/org.spearce.jgit.test/tst/org/spearce/jgit/lib/PackWriterTest.java
+++ b/org.spearce.jgit.test/tst/org/spearce/jgit/lib/PackWriterTest.java
@@ -87,7 +87,7 @@ public class PackWriterTest extends RepositoryTestCase {
packBase = new File(trash, "tmp_pack");
packFile = new File(trash, "tmp_pack.pack");
indexFile = new File(trash, "tmp_pack.idx");
- writer = new PackWriter(db, cos, new TextProgressMonitor());
+ writer = new PackWriter(db, new TextProgressMonitor());
}
/**
@@ -438,14 +438,16 @@ public class PackWriterTest extends RepositoryTestCase {
final Collection<ObjectId> uninterestings, final boolean thin,
final boolean ignoreMissingUninteresting)
throws MissingObjectException, IOException {
- writer.writePack(interestings, uninterestings, thin,
+ writer.preparePack(interestings, uninterestings, thin,
ignoreMissingUninteresting);
+ writer.writePack(cos);
verifyOpenPack(thin);
}
private void createVerifyOpenPack(final Iterator<RevObject> objectSource)
throws MissingObjectException, IOException {
- writer.writePack(objectSource);
+ writer.preparePack(objectSource);
+ writer.writePack(cos);
verifyOpenPack(false);
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
index e346668..c7aa061 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/PackWriter.java
@@ -37,6 +37,7 @@
package org.spearce.jgit.lib;
+import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.security.DigestOutputStream;
@@ -76,10 +77,10 @@ import org.spearce.jgit.util.NB;
* order of objects in pack</li>
* </ul>
* Typical usage consists of creating instance intended for some pack,
- * configuring options through accessors methods and finally call
- * {@link #writePack(Iterator)} or
- * {@link #writePack(Collection, Collection, boolean, boolean)} with objects
- * specification, to generate a pack stream.
+ * configuring options, preparing the list of objects by calling
+ * {@link #preparePack(Iterator)} or
+ * {@link #preparePack(Collection, Collection, boolean, boolean)}, and finally
+ * producing the stream with {@link #writePack(OutputStream)}.
* </p>
* <p>
* Class provide set of configurable options and {@link ProgressMonitor}
@@ -99,7 +100,7 @@ public class PackWriter {
* Title of {@link ProgressMonitor} task used during counting objects to
* pack.
*
- * @see #writePack(Collection, Collection, boolean, boolean)
+ * @see #preparePack(Collection, Collection, boolean, boolean)
*/
public static final String COUNTING_OBJECTS_PROGRESS = "Counting objects to pack";
@@ -107,8 +108,7 @@ public class PackWriter {
* Title of {@link ProgressMonitor} task used during searching for objects
* reuse or delta reuse.
*
- * @see #writePack(Iterator)
- * @see #writePack(Collection, Collection, boolean, boolean)
+ * @see #writePack(OutputStream)
*/
public static final String SEARCHING_REUSE_PROGRESS = "Searching for delta and object reuse";
@@ -116,8 +116,7 @@ public class PackWriter {
* Title of {@link ProgressMonitor} task used during writing out pack
* (objects)
*
- * @see #writePack(Iterator)
- * @see #writePack(Collection, Collection, boolean, boolean)
+ * @see #writePack(OutputStream)
*/
public static final String WRITING_OBJECTS_PROGRESS = "Writing objects";
@@ -168,9 +167,9 @@ public class PackWriter {
private final Repository db;
- private final DigestOutputStream out;
+ private DigestOutputStream out;
- private final CountingOutputStream countingOut;
+ private CountingOutputStream countingOut;
private final Deflater deflater;
@@ -197,28 +196,22 @@ public class PackWriter {
private boolean thin;
/**
- * Create writer for specified repository, that will write a pack to
- * provided output stream. Objects for packing are specified in
- * {@link #writePack(Iterator)} or
- * {@link #writePack(Collection, Collection, boolean, boolean)}.
+ * Create writer for specified repository.
+ * <p>
+ * Objects for packing are specified in {@link #preparePack(Iterator)} or
+ * {@link #preparePack(Collection, Collection, boolean, boolean)}.
*
* @param repo
* repository where objects are stored.
- * @param out
- * output stream of pack data; no buffering is guaranteed by
- * writer.
* @param monitor
* operations progress monitor, used within
- * {@link #writePack(Iterator)} or
- * {@link #writePack(Collection, Collection, boolean, boolean)}.
+ * {@link #preparePack(Iterator)},
+ * {@link #preparePack(Collection, Collection, boolean, boolean)},
+ * or {@link #writePack(OutputStream)}.
*/
- public PackWriter(final Repository repo, final OutputStream out,
- final ProgressMonitor monitor) {
+ public PackWriter(final Repository repo, final ProgressMonitor monitor) {
this.db = repo;
this.monitor = monitor;
- this.countingOut = new CountingOutputStream(out);
- this.out = new DigestOutputStream(countingOut, Constants
- .newMessageDigest());
this.deflater = new Deflater(db.getConfig().getCore().getCompression());
}
@@ -241,9 +234,9 @@ public class PackWriter {
* use it if possible. Normally, only deltas with base to another object
* existing in set of objects to pack will be used. Exception is however
* thin-pack (see
- * {@link #writePack(Collection, Collection, boolean, boolean)} and
- * {@link #writePack(Iterator)}) where base object must exist on other side
- * machine.
+ * {@link #preparePack(Collection, Collection, boolean, boolean)} and
+ * {@link #preparePack(Iterator)}) where base object must exist on other
+ * side machine.
* <p>
* When raw delta data is directly copied from a pack file, checksum is
* computed to verify data.
@@ -371,8 +364,7 @@ public class PackWriter {
}
/**
- * Write pack to output stream according to current writer configuration for
- * provided source iterator of objects.
+ * Prepare the list of objects to be written to the pack stream.
* <p>
* Iterator <b>exactly</b> determines which objects are included in a pack
* and order they appear in pack (except that objects order by type is not
@@ -391,17 +383,6 @@ public class PackWriter {
* {@link RevFlag#UNINTERESTING} flag. This type of pack is used only for
* transport.
* </p>
- * <p>
- * At first, this method collects and sorts objects to pack, then deltas
- * search is performed if set up accordingly, finally pack stream is
- * written. {@link ProgressMonitor} tasks {@value #SEARCHING_REUSE_PROGRESS}
- * (only if resueDeltas or reuseObjects is enabled) and
- * {@value #WRITING_OBJECTS_PROGRESS} are updated during packing.
- * </p>
- * <p>
- * All reused objects data checksum (Adler32/CRC32) is computed and
- * validated against existing checksum.
- * </p>
*
* @param objectsSource
* iterator of object to store in a pack; order of objects within
@@ -414,20 +395,17 @@ public class PackWriter {
* {@link RevFlag#UNINTERESTING} flag set, it won't be included
* in a pack, but is considered as edge-object for thin-pack.
* @throws IOException
- * when some I/O problem occur during reading objects for pack
- * or writing pack stream.
+ * when some I/O problem occur during reading objects.
*/
- public void writePack(final Iterator<RevObject> objectsSource)
+ public void preparePack(final Iterator<RevObject> objectsSource)
throws IOException {
while (objectsSource.hasNext()) {
addObject(objectsSource.next());
}
- writePackInternal();
}
/**
- * Write pack to output stream according to current writer configuration for
- * provided sets of interesting and uninteresting objects.
+ * Prepare the list of objects to be written to the pack stream.
* <p>
* Basing on these 2 sets, another set of objects to put in a pack file is
* created: this set consists of all objects reachable (ancestors) from
@@ -437,18 +415,6 @@ public class PackWriter {
* Order is consistent with general git in-pack rules: sort by object type,
* recency, path and delta-base first.
* </p>
- * <p>
- * At first, this method collects and sorts objects to pack, then deltas
- * search is performed if set up accordingly, finally pack stream is
- * written. {@link ProgressMonitor} tasks
- * {@value #COUNTING_OBJECTS_PROGRESS}, {@value #SEARCHING_REUSE_PROGRESS}
- * (only if resueDeltas or reuseObjects is enabled) and
- * {@value #WRITING_OBJECTS_PROGRESS} are updated during packing.
- * </p>
- * <p>
- * All reused objects data checksum (Adler32/CRC32) is computed and
- * validated against existing checksum.
- * </p>
*
* @param interestingObjects
* collection of objects to be marked as interesting (start
@@ -468,17 +434,15 @@ public class PackWriter {
* otherwise - non existing uninteresting objects may cause
* {@link MissingObjectException}
* @throws IOException
- * when some I/O problem occur during reading objects for pack
- * or writing pack stream.
+ * when some I/O problem occur during reading objects.
*/
- public void writePack(final Collection<ObjectId> interestingObjects,
+ public void preparePack(final Collection<ObjectId> interestingObjects,
final Collection<ObjectId> uninterestingObjects,
final boolean thin, final boolean ignoreMissingUninteresting)
throws IOException {
ObjectWalk walker = setUpWalker(interestingObjects,
uninterestingObjects, thin, ignoreMissingUninteresting);
findObjectsToPack(walker);
- writePackInternal();
}
/**
@@ -499,8 +463,8 @@ public class PackWriter {
/**
* Create an index file to match the pack file just written.
* <p>
- * This method can only be invoked after {@link #writePack(Iterator)} or
- * {@link #writePack(Collection, Collection, boolean, boolean)} has been
+ * This method can only be invoked after {@link #preparePack(Iterator)} or
+ * {@link #preparePack(Collection, Collection, boolean, boolean)} has been
* invoked and completed successfully. Writing a corresponding index is an
* optional feature that not all pack users may require.
*
@@ -532,10 +496,38 @@ public class PackWriter {
return sortedByName;
}
- private void writePackInternal() throws IOException {
+ /**
+ * Write the prepared pack to the supplied stream.
+ * <p>
+ * At first, this method collects and sorts objects to pack, then deltas
+ * search is performed if set up accordingly, finally pack stream is
+ * written. {@link ProgressMonitor} tasks {@value #SEARCHING_REUSE_PROGRESS}
+ * (only if resueDeltas or reuseObjects is enabled) and
+ * {@value #WRITING_OBJECTS_PROGRESS} are updated during packing.
+ * </p>
+ * <p>
+ * All reused objects data checksum (Adler32/CRC32) is computed and
+ * validated against existing checksum.
+ * </p>
+ *
+ * @param packStream
+ * output stream of pack data. If the stream is not buffered it
+ * will be buffered by the writer. Caller is responsible for
+ * closing the stream.
+ * @throws IOException
+ * an error occurred reading a local object's data to include in
+ * the pack, or writing compressed object data to the output
+ * stream.
+ */
+ public void writePack(OutputStream packStream) throws IOException {
if (reuseDeltas || reuseObjects)
searchForReuse();
+ if (!(packStream instanceof BufferedOutputStream))
+ packStream = new BufferedOutputStream(packStream);
+ countingOut = new CountingOutputStream(packStream);
+ out = new DigestOutputStream(countingOut, Constants.newMessageDigest());
+
monitor.beginTask(WRITING_OBJECTS_PROGRESS, getObjectsNumber());
writeHeader();
writeObjects();
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackPushConnection.java b/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackPushConnection.java
index 217486a..7ae3aa7 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackPushConnection.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackPushConnection.java
@@ -161,7 +161,7 @@ class BasePackPushConnection extends BasePackConnection implements
private void writePack(final Map<String, RemoteRefUpdate> refUpdates,
final ProgressMonitor monitor) throws IOException {
- final PackWriter writer = new PackWriter(local, out, monitor);
+ final PackWriter writer = new PackWriter(local, monitor);
final ArrayList<ObjectId> remoteObjects = new ArrayList<ObjectId>(
getRefs().size());
final ArrayList<ObjectId> newObjects = new ArrayList<ObjectId>(
@@ -172,7 +172,8 @@ class BasePackPushConnection extends BasePackConnection implements
for (final RemoteRefUpdate r : refUpdates.values())
newObjects.add(r.getNewObjectId());
- writer.writePack(newObjects, remoteObjects, thinPack, true);
+ writer.preparePack(newObjects, remoteObjects, thinPack, true);
+ writer.writePack(out);
}
private void readStatusReport(final Map<String, RemoteRefUpdate> refUpdates)
--
1.5.6.74.g8a5e
* [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created
2008-06-29 7:59 ` [JGIT PATCH 08/21] Allow PackWriter to prepare object list and compute name before writing Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 10/21] Simplify walker transport ref advertisement setup Shawn O. Pearce
2008-06-29 13:51 ` [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created Robin Rosenberg
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
To efficiently delete or update a ref we need to know where
it came from when it was read into the process. If the ref
is being updated we can usually just write the loose file,
but if it is being deleted we may need to remove not just the
loose file but also its entry in the packed-refs file.
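For illustration (the object id here is arbitrary), a ref read only
from packed-refs now reports that deleting it must rewrite that file:
  final Ref r = new Ref(Ref.Storage.PACKED, "refs/tags/v1.0",
      ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
  // r.getStorage().isLoose() == false: no loose file to unlink.
  // r.getStorage().isPacked() == true: packed-refs must be rewritten.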
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../spearce/jgit/transport/PushProcessTest.java | 94 ++++++++++----------
.../spearce/jgit/transport/RefSpecTestCase.java | 26 +++---
org.spearce.jgit/src/org/spearce/jgit/lib/Ref.java | 91 ++++++++++++++++++-
.../src/org/spearce/jgit/lib/RefDatabase.java | 23 +++--
.../spearce/jgit/transport/BasePackConnection.java | 6 +-
.../spearce/jgit/transport/TransportBundle.java | 3 +-
.../org/spearce/jgit/transport/TransportHttp.java | 6 +-
.../org/spearce/jgit/transport/TransportSftp.java | 18 +++-
8 files changed, 186 insertions(+), 81 deletions(-)
diff --git a/org.spearce.jgit.test/tst/org/spearce/jgit/transport/PushProcessTest.java b/org.spearce.jgit.test/tst/org/spearce/jgit/transport/PushProcessTest.java
index cfea4d5..bb912e6 100644
--- a/org.spearce.jgit.test/tst/org/spearce/jgit/transport/PushProcessTest.java
+++ b/org.spearce.jgit.test/tst/org/spearce/jgit/transport/PushProcessTest.java
@@ -75,65 +75,65 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for fast-forward remote update.
- *
+ *
* @throws IOException
*/
public void testUpdateFastForward() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
testOneUpdateStatus(rru, ref, Status.OK, true);
}
/**
* Test for non fast-forward remote update, when remote object is not known
* to local repository.
- *
+ *
* @throws IOException
*/
public void testUpdateNonFastForwardUnknownObject() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("0000000000000000000000000000000000000001"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("0000000000000000000000000000000000000001"));
testOneUpdateStatus(rru, ref, Status.REJECTED_NONFASTFORWARD, null);
}
/**
* Test for non fast-forward remote update, when remote object is known to
* local repository, but it is not an ancestor of new object.
- *
+ *
* @throws IOException
*/
public void testUpdateNonFastForward() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"ac7e7e44c1885efb472ad54a78327d66bfc4ecef",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
testOneUpdateStatus(rru, ref, Status.REJECTED_NONFASTFORWARD, null);
}
/**
* Test for non fast-forward remote update, when force update flag is set.
- *
+ *
* @throws IOException
*/
public void testUpdateNonFastForwardForced() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"ac7e7e44c1885efb472ad54a78327d66bfc4ecef",
"refs/heads/master", true, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
testOneUpdateStatus(rru, ref, Status.OK, false);
}
/**
* Test for remote ref creation.
- *
+ *
* @throws IOException
*/
public void testUpdateCreateRef() throws IOException {
@@ -145,21 +145,21 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for remote ref deletion.
- *
+ *
* @throws IOException
*/
public void testUpdateDelete() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db, null,
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
testOneUpdateStatus(rru, ref, Status.OK, true);
}
/**
* Test for remote ref deletion (try), when that ref doesn't exist on remote
* repo.
- *
+ *
* @throws IOException
*/
public void testUpdateDeleteNonExisting() throws IOException {
@@ -170,21 +170,21 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for remote ref update, when it is already up to date.
- *
+ *
* @throws IOException
*/
public void testUpdateUpToDate() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
testOneUpdateStatus(rru, ref, Status.UP_TO_DATE, null);
}
/**
* Test for remote ref update with expected remote object.
- *
+ *
* @throws IOException
*/
public void testUpdateExpectedRemote() throws IOException {
@@ -192,15 +192,15 @@ public class PushProcessTest extends RepositoryTestCase {
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, ObjectId
.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
testOneUpdateStatus(rru, ref, Status.OK, true);
}
/**
* Test for remote ref update with expected old object set, when old object
* is not that expected one.
- *
+ *
* @throws IOException
*/
public void testUpdateUnexpectedRemote() throws IOException {
@@ -208,8 +208,8 @@ public class PushProcessTest extends RepositoryTestCase {
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, ObjectId
.fromString("0000000000000000000000000000000000000001"));
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
testOneUpdateStatus(rru, ref, Status.REJECTED_REMOTE_CHANGED, null);
}
@@ -217,7 +217,7 @@ public class PushProcessTest extends RepositoryTestCase {
* Test for remote ref update with expected old object set, when old object
* is not that expected one and force update flag is set (which should have
* lower priority) - shouldn't change behavior.
- *
+ *
* @throws IOException
*/
public void testUpdateUnexpectedRemoteVsForce() throws IOException {
@@ -225,14 +225,14 @@ public class PushProcessTest extends RepositoryTestCase {
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", true, null, ObjectId
.fromString("0000000000000000000000000000000000000001"));
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
testOneUpdateStatus(rru, ref, Status.REJECTED_REMOTE_CHANGED, null);
}
/**
* Test for remote ref udpate, when connection rejects update.
- *
+ *
* @throws IOException
*/
public void testUpdateRejectedByConnection() throws IOException {
@@ -240,22 +240,22 @@ public class PushProcessTest extends RepositoryTestCase {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
testOneUpdateStatus(rru, ref, Status.REJECTED_OTHER_REASON, null);
}
/**
* Test for remote refs updates with mixed cases that shouldn't depend on
* each other.
- *
+ *
* @throws IOException
*/
public void testUpdateMixedCases() throws IOException {
final RemoteRefUpdate rruOk = new RemoteRefUpdate(db, null,
"refs/heads/master", false, null, null);
- final Ref refToChange = new Ref("refs/heads/master", ObjectId
- .fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
+ final Ref refToChange = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
final RemoteRefUpdate rruReject = new RemoteRefUpdate(db, null,
"refs/heads/nonexisting", false, null, null);
refUpdates.add(rruOk);
@@ -269,15 +269,15 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for local tracking ref update.
- *
+ *
* @throws IOException
*/
public void testTrackingRefUpdateEnabled() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, "refs/remotes/test/master", null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
refUpdates.add(rru);
advertisedRefs.add(ref);
final PushResult result = executePush();
@@ -290,15 +290,15 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for local tracking ref update disabled.
- *
+ *
* @throws IOException
*/
public void testTrackingRefUpdateDisabled() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
refUpdates.add(rru);
advertisedRefs.add(ref);
final PushResult result = executePush();
@@ -307,15 +307,15 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for local tracking ref update when remote update has failed.
- *
+ *
* @throws IOException
*/
public void testTrackingRefUpdateOnReject() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"ac7e7e44c1885efb472ad54a78327d66bfc4ecef",
"refs/heads/master", false, null, null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("2c349335b7f797072cf729c4f3bb0914ecb6dec9"));
final PushResult result = testOneUpdateStatus(rru, ref,
Status.REJECTED_NONFASTFORWARD, null);
assertTrue(result.getTrackingRefUpdates().isEmpty());
@@ -323,15 +323,15 @@ public class PushProcessTest extends RepositoryTestCase {
/**
* Test for push operation result - that contains expected elements.
- *
+ *
* @throws IOException
*/
public void testPushResult() throws IOException {
final RemoteRefUpdate rru = new RemoteRefUpdate(db,
"2c349335b7f797072cf729c4f3bb0914ecb6dec9",
"refs/heads/master", false, "refs/remotes/test/master", null);
- final Ref ref = new Ref("refs/heads/master", ObjectId
- .fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
+ final Ref ref = new Ref(Ref.Storage.LOOSE, "refs/heads/master",
+ ObjectId.fromString("ac7e7e44c1885efb472ad54a78327d66bfc4ecef"));
refUpdates.add(rru);
advertisedRefs.add(ref);
final PushResult result = executePush();
diff --git a/org.spearce.jgit.test/tst/org/spearce/jgit/transport/RefSpecTestCase.java b/org.spearce.jgit.test/tst/org/spearce/jgit/transport/RefSpecTestCase.java
index 33b3fba..341b4a4 100644
--- a/org.spearce.jgit.test/tst/org/spearce/jgit/transport/RefSpecTestCase.java
+++ b/org.spearce.jgit.test/tst/org/spearce/jgit/transport/RefSpecTestCase.java
@@ -53,12 +53,12 @@ public class RefSpecTestCase extends TestCase {
assertEquals(sn + ":" + sn, rs.toString());
assertEquals(rs, new RefSpec(rs.toString()));
- Ref r = new Ref(sn, null);
+ Ref r = new Ref(Ref.Storage.LOOSE, sn, null);
assertTrue(rs.matchSource(r));
assertTrue(rs.matchDestination(r));
assertSame(rs, rs.expandFromSource(r));
- r = new Ref(sn + "-and-more", null);
+ r = new Ref(Ref.Storage.LOOSE, sn + "-and-more", null);
assertFalse(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
}
@@ -73,12 +73,12 @@ public class RefSpecTestCase extends TestCase {
assertEquals("+" + sn + ":" + sn, rs.toString());
assertEquals(rs, new RefSpec(rs.toString()));
- Ref r = new Ref(sn, null);
+ Ref r = new Ref(Ref.Storage.LOOSE, sn, null);
assertTrue(rs.matchSource(r));
assertTrue(rs.matchDestination(r));
assertSame(rs, rs.expandFromSource(r));
- r = new Ref(sn + "-and-more", null);
+ r = new Ref(Ref.Storage.LOOSE, sn + "-and-more", null);
assertFalse(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
}
@@ -93,12 +93,12 @@ public class RefSpecTestCase extends TestCase {
assertEquals(sn, rs.toString());
assertEquals(rs, new RefSpec(rs.toString()));
- Ref r = new Ref(sn, null);
+ Ref r = new Ref(Ref.Storage.LOOSE, sn, null);
assertTrue(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
assertSame(rs, rs.expandFromSource(r));
- r = new Ref(sn + "-and-more", null);
+ r = new Ref(Ref.Storage.LOOSE, sn + "-and-more", null);
assertFalse(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
}
@@ -113,12 +113,12 @@ public class RefSpecTestCase extends TestCase {
assertEquals("+" + sn, rs.toString());
assertEquals(rs, new RefSpec(rs.toString()));
- Ref r = new Ref(sn, null);
+ Ref r = new Ref(Ref.Storage.LOOSE, sn, null);
assertTrue(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
assertSame(rs, rs.expandFromSource(r));
- r = new Ref(sn + "-and-more", null);
+ r = new Ref(Ref.Storage.LOOSE, sn + "-and-more", null);
assertFalse(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
}
@@ -133,12 +133,12 @@ public class RefSpecTestCase extends TestCase {
assertEquals(":" + sn, rs.toString());
assertEquals(rs, new RefSpec(rs.toString()));
- Ref r = new Ref(sn, null);
+ Ref r = new Ref(Ref.Storage.LOOSE, sn, null);
assertFalse(rs.matchSource(r));
assertTrue(rs.matchDestination(r));
assertSame(rs, rs.expandFromSource(r));
- r = new Ref(sn + "-and-more", null);
+ r = new Ref(Ref.Storage.LOOSE, sn + "-and-more", null);
assertFalse(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
}
@@ -157,7 +157,7 @@ public class RefSpecTestCase extends TestCase {
Ref r;
RefSpec expanded;
- r = new Ref("refs/heads/master", null);
+ r = new Ref(Ref.Storage.LOOSE, "refs/heads/master", null);
assertTrue(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
expanded = rs.expandFromSource(r);
@@ -167,11 +167,11 @@ public class RefSpecTestCase extends TestCase {
assertEquals(r.getName(), expanded.getSource());
assertEquals("refs/remotes/origin/master", expanded.getDestination());
- r = new Ref("refs/remotes/origin/next", null);
+ r = new Ref(Ref.Storage.LOOSE, "refs/remotes/origin/next", null);
assertFalse(rs.matchSource(r));
assertTrue(rs.matchDestination(r));
- r = new Ref("refs/tags/v1.0", null);
+ r = new Ref(Ref.Storage.LOOSE, "refs/tags/v1.0", null);
assertFalse(rs.matchSource(r));
assertFalse(rs.matchDestination(r));
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/Ref.java b/org.spearce.jgit/src/org/spearce/jgit/lib/Ref.java
index b7e361f..5b0e13f 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/Ref.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/Ref.java
@@ -45,6 +45,74 @@ package org.spearce.jgit.lib;
* commit, annotated tag, ...).
*/
public class Ref {
+ /** Location where a {@link Ref} is stored. */
+ public static enum Storage {
+ /**
+ * The ref does not exist yet, updating it may create it.
+ * <p>
+ * Creation is likely to choose {@link #LOOSE} storage.
+ */
+ NEW(true, false),
+
+ /**
+ * The ref is stored in a file by itself.
+ * <p>
+ * Updating this ref affects only this ref.
+ */
+ LOOSE(true, false),
+
+ /**
+ * The ref is stored in the <code>packed-refs</code> file, with
+ * others.
+ * <p>
+ * Updating this ref requires rewriting the file, with perhaps many
+ * other refs being included at the same time.
+ */
+ PACKED(false, true),
+
+ /**
+ * The ref is both {@link #LOOSE} and {@link #PACKED}.
+ * <p>
+ * Updating this ref requires only updating the loose file, but deletion
+ * requires updating both the loose file and the packed refs file.
+ */
+ LOOSE_PACKED(true, true),
+
+ /**
+ * The ref came from a network advertisement and storage is unknown.
+ * <p>
+ * This ref cannot be updated without Git-aware support on the remote
+ * side, as Git-aware code consolidate the remote refs and reported them
+ * to this process.
+ */
+ NETWORK(false, false);
+
+ private final boolean loose;
+
+ private final boolean packed;
+
+ private Storage(final boolean l, final boolean p) {
+ loose = l;
+ packed = p;
+ }
+
+ /**
+ * @return true if this storage has a loose file.
+ */
+ public boolean isLoose() {
+ return loose;
+ }
+
+ /**
+ * @return true if this storage is inside the packed file.
+ */
+ public boolean isPacked() {
+ return packed;
+ }
+ }
+
+ private final Storage storage;
+
private final String name;
private ObjectId objectId;
@@ -54,13 +122,16 @@ public class Ref {
/**
* Create a new ref pairing.
*
+ * @param st
+ * method used to store this ref.
* @param refName
* name of this ref.
* @param id
* current value of the ref. May be null to indicate a ref that
* does not exist yet.
*/
- public Ref(final String refName, final ObjectId id) {
+ public Ref(final Storage st, final String refName, final ObjectId id) {
+ storage = st;
name = refName;
objectId = id;
}
@@ -68,6 +139,8 @@ public class Ref {
/**
* Create a new ref pairing.
*
+ * @param st
+ * method used to store this ref.
* @param refName
* name of this ref.
* @param id
@@ -77,7 +150,9 @@ public class Ref {
* peeled value of the ref's tag. May be null if this is not a
* tag or the peeled value is not known.
*/
- public Ref(final String refName, final ObjectId id, final ObjectId peel) {
+ public Ref(final Storage st, final String refName, final ObjectId id,
+ final ObjectId peel) {
+ storage = st;
name = refName;
objectId = id;
peeledObjectId = peel;
@@ -112,6 +187,18 @@ public class Ref {
return peeledObjectId;
}
+ /**
+ * How was this ref obtained?
+ * <p>
+ * The current storage model of a Ref may influence how the ref must be
+ * updated or deleted from the repository.
+ *
+ * @return type of ref.
+ */
+ public Storage getStorage() {
+ return storage;
+ }
+
public String toString() {
return "Ref[" + name + "=" + getObjectId() + "]";
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/lib/RefDatabase.java b/org.spearce.jgit/src/org/spearce/jgit/lib/RefDatabase.java
index 1857982..9e3e020 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/lib/RefDatabase.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/lib/RefDatabase.java
@@ -126,12 +126,12 @@ class RefDatabase {
RefUpdate newUpdate(final String name) throws IOException {
Ref r = readRefBasic(name, 0);
if (r == null)
- r = new Ref(name, null);
+ r = new Ref(Ref.Storage.NEW, name, null);
return new RefUpdate(this, r, fileForRef(r.getName()));
}
void stored(final String name, final ObjectId id, final long time) {
- looseRefs.put(name, new CachedRef(name, id, time));
+ looseRefs.put(name, new CachedRef(Ref.Storage.LOOSE, name, id, time));
}
/**
@@ -256,7 +256,8 @@ class RefDatabase {
return;
}
- ref = new CachedRef(refName, id, ent.lastModified());
+ ref = new CachedRef(Ref.Storage.LOOSE, refName, id, ent
+ .lastModified());
looseRefs.put(ref.getName(), ref);
avail.put(ref.getName(), ref);
} finally {
@@ -307,7 +308,7 @@ class RefDatabase {
}
if (line == null || line.length() == 0)
- return new Ref(name, null);
+ return new Ref(Ref.Storage.LOOSE, name, null);
if (line.startsWith("ref: ")) {
if (depth >= 5) {
@@ -317,7 +318,7 @@ class RefDatabase {
final String target = line.substring("ref: ".length());
final Ref r = readRefBasic(target, depth + 1);
- return r != null ? r : new Ref(target, null);
+ return r != null ? r : new Ref(Ref.Storage.LOOSE, target, null);
}
final ObjectId id;
@@ -327,7 +328,7 @@ class RefDatabase {
throw new IOException("Not a ref: " + name + ": " + line);
}
- ref = new CachedRef(name, id, mtime);
+ ref = new CachedRef(Ref.Storage.LOOSE, name, id, mtime);
looseRefs.put(name, ref);
return ref;
}
@@ -359,7 +360,8 @@ class RefDatabase {
throw new IOException("Peeled line before ref.");
final ObjectId id = ObjectId.fromString(p.substring(1));
- last = new Ref(last.getName(), last.getObjectId(), id);
+ last = new Ref(Ref.Storage.PACKED, last.getName(), last
+ .getObjectId(), id);
newPackedRefs.put(last.getName(), last);
continue;
}
@@ -367,7 +369,7 @@ class RefDatabase {
final int sp = p.indexOf(' ');
final ObjectId id = ObjectId.fromString(p.substring(0, sp));
final String name = new String(p.substring(sp + 1));
- last = new Ref(name, id);
+ last = new Ref(Ref.Storage.PACKED, name, id);
newPackedRefs.put(last.getName(), last);
}
} finally {
@@ -406,8 +408,9 @@ class RefDatabase {
private static class CachedRef extends Ref {
final long lastModified;
- CachedRef(final String refName, final ObjectId id, final long mtime) {
- super(refName, id);
+ CachedRef(final Storage st, final String refName, final ObjectId id,
+ final long mtime) {
+ super(st, refName, id);
lastModified = mtime;
}
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackConnection.java b/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackConnection.java
index 2ccd422..a878f01 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackConnection.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/BasePackConnection.java
@@ -161,9 +161,11 @@ abstract class BasePackConnection extends BaseConnection {
if (prior.getPeeledObjectId() != null)
throw duplicateAdvertisement(name + "^{}");
- avail.put(name, new Ref(name, prior.getObjectId(), id));
+ avail.put(name, new Ref(Ref.Storage.NETWORK, name, prior
+ .getObjectId(), id));
} else {
- final Ref prior = avail.put(name, new Ref(name, id));
+ final Ref prior;
+ prior = avail.put(name, new Ref(Ref.Storage.NETWORK, name, id));
if (prior != null)
throw duplicateAdvertisement(name);
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportBundle.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportBundle.java
index 1bf081a..6169179 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportBundle.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportBundle.java
@@ -165,7 +165,8 @@ class TransportBundle extends PackTransport {
final String name = line.substring(41, line.length());
final ObjectId id = ObjectId.fromString(line.substring(0, 40));
- final Ref prior = avail.put(name, new Ref(name, id));
+ final Ref prior = avail.put(name, new Ref(Ref.Storage.NETWORK,
+ name, id));
if (prior != null)
throw duplicateAdvertisement(name);
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
index 33f9f90..b18b8e3 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
@@ -235,9 +235,11 @@ class TransportHttp extends WalkTransport {
if (prior.getPeeledObjectId() != null)
throw duplicateAdvertisement(name + "^{}");
- avail.put(name, new Ref(name, prior.getObjectId(), id));
+ avail.put(name, new Ref(Ref.Storage.NETWORK, name, prior
+ .getObjectId(), id));
} else {
- final Ref prior = avail.put(name, new Ref(name, id));
+ final Ref prior = avail.put(name, new Ref(
+ Ref.Storage.NETWORK, name, id));
if (prior != null)
throw duplicateAdvertisement(name);
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
index 21657ef..092c5d3 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
@@ -54,6 +54,7 @@ import org.spearce.jgit.errors.TransportException;
import org.spearce.jgit.lib.ObjectId;
import org.spearce.jgit.lib.Ref;
import org.spearce.jgit.lib.Repository;
+import org.spearce.jgit.lib.Ref.Storage;
import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelSftp;
@@ -277,7 +278,8 @@ class TransportSftp extends WalkTransport {
if (last == null)
throw new TransportException("Peeled line before ref.");
final ObjectId id = ObjectId.fromString(line + 1);
- last = new Ref(last.getName(), last.getObjectId(), id);
+ last = new Ref(Ref.Storage.PACKED, last.getName(), last
+ .getObjectId(), id);
avail.put(last.getName(), last);
continue;
}
@@ -287,7 +289,7 @@ class TransportSftp extends WalkTransport {
throw new TransportException("Unrecognized ref: " + line);
final ObjectId id = ObjectId.fromString(line.substring(0, sp));
final String name = line.substring(sp + 1);
- last = new Ref(name, id);
+ last = new Ref(Ref.Storage.PACKED, name, id);
avail.put(last.getName(), last);
}
}
@@ -342,14 +344,16 @@ class TransportSftp extends WalkTransport {
if (r == null)
r = avail.get(p);
if (r != null) {
- r = new Ref(name, r.getObjectId(), r.getPeeledObjectId());
+ r = new Ref(loose(r), name, r.getObjectId(), r
+ .getPeeledObjectId());
avail.put(name, r);
}
return r;
}
if (ObjectId.isId(line)) {
- final Ref r = new Ref(name, ObjectId.fromString(line));
+ final Ref r = new Ref(loose(avail.get(name)), name, ObjectId
+ .fromString(line));
avail.put(r.getName(), r);
return r;
}
@@ -357,6 +361,12 @@ class TransportSftp extends WalkTransport {
throw new TransportException("Bad ref: " + name + ": " + line);
}
+ private Storage loose(final Ref r) {
+ if (r != null && r.getStorage() == Storage.PACKED)
+ return Storage.LOOSE_PACKED;
+ return Storage.LOOSE;
+ }
+
@Override
void close() {
if (ftp != null) {
--
1.5.6.74.g8a5e
* [JGIT PATCH 10/21] Simplify walker transport ref advertisement setup
2008-06-29 7:59 ` [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 11/21] Indicate the protocol jgit doesn't support push over Shawn O. Pearce
2008-06-29 13:51 ` [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created Robin Rosenberg
1 sibling, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
We need to perform the same advertisement setup during push for
any of these protocols but we won't have a WalkFetchConnection.
Returning the map simplifies the code and allows it to be used
for the push variant.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../org/spearce/jgit/transport/TransportHttp.java | 15 +++++++--------
.../org/spearce/jgit/transport/TransportSftp.java | 8 ++++----
2 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
index b18b8e3..231dbfe 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
@@ -50,6 +50,7 @@ import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.Collection;
+import java.util.Map;
import java.util.TreeMap;
import org.spearce.jgit.errors.NotSupportedException;
@@ -102,7 +103,7 @@ class TransportHttp extends WalkTransport {
public FetchConnection openFetch() throws TransportException {
final HttpObjectDB c = new HttpObjectDB(objectsUrl);
final WalkFetchConnection r = new WalkFetchConnection(this, c);
- c.readAdvertisedRefs(r);
+ r.available(c.readAdvertisedRefs());
return r;
}
@@ -188,12 +189,11 @@ class TransportHttp extends WalkTransport {
}
}
- void readAdvertisedRefs(final WalkFetchConnection c)
- throws TransportException {
+ Map<String, Ref> readAdvertisedRefs() throws TransportException {
try {
final BufferedReader br = openReader(INFO_REFS);
try {
- readAdvertisedImpl(br, c);
+ return readAdvertisedImpl(br);
} finally {
br.close();
}
@@ -208,9 +208,8 @@ class TransportHttp extends WalkTransport {
}
}
- private void readAdvertisedImpl(final BufferedReader br,
- final WalkFetchConnection connection) throws IOException,
- PackProtocolException {
+ private Map<String, Ref> readAdvertisedImpl(final BufferedReader br)
+ throws IOException, PackProtocolException {
final TreeMap<String, Ref> avail = new TreeMap<String, Ref>();
for (;;) {
String line = br.readLine();
@@ -244,7 +243,7 @@ class TransportHttp extends WalkTransport {
throw duplicateAdvertisement(name);
}
}
- connection.available(avail);
+ return avail;
}
private PackProtocolException outOfOrderAdvertisement(final String n) {
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
index 092c5d3..c2f34f7 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
@@ -48,6 +48,7 @@ import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
+import java.util.Map;
import java.util.TreeMap;
import org.spearce.jgit.errors.TransportException;
@@ -96,7 +97,7 @@ class TransportSftp extends WalkTransport {
public FetchConnection openFetch() throws TransportException {
final SftpObjectDB c = new SftpObjectDB(uri.getPath());
final WalkFetchConnection r = new WalkFetchConnection(this, c);
- c.readAdvertisedRefs(r);
+ r.available(c.readAdvertisedRefs());
return r;
}
@@ -245,8 +246,7 @@ class TransportSftp extends WalkTransport {
}
}
- void readAdvertisedRefs(final WalkFetchConnection connection)
- throws TransportException {
+ Map<String, Ref> readAdvertisedRefs() throws TransportException {
final TreeMap<String, Ref> avail = new TreeMap<String, Ref>();
try {
final BufferedReader br = openReader("../packed-refs");
@@ -262,7 +262,7 @@ class TransportSftp extends WalkTransport {
}
readRef(avail, "../HEAD", "HEAD");
readLooseRefs(avail, "../refs", "refs/");
- connection.available(avail);
+ return avail;
}
private void readPackedRefs(final TreeMap<String, Ref> avail,
--
1.5.6.74.g8a5e
* [JGIT PATCH 11/21] Indicate the protocol jgit doesn't support push over
2008-06-29 7:59 ` [JGIT PATCH 10/21] Simplify walker transport ref advertisement setup Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 12/21] WalkTransport must allow subclasses to implement openPush Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
Not all of jgit's protocols will support push, as building their
implementations may take time or simply prove impractical. Some
users may not understand what a walking transport is, but they can
understand that http isn't supported for push.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../org/spearce/jgit/transport/WalkTransport.java | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java
index 29dd661..e208b12 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java
@@ -59,7 +59,7 @@ abstract class WalkTransport extends Transport {
@Override
public PushConnection openPush() throws NotSupportedException {
- throw new NotSupportedException(
- "Push is not supported by object walking transports");
+ final String s = getURI().getScheme();
+ throw new NotSupportedException("Push not supported over " + s + ".");
}
}
--
1.5.6.74.g8a5e
* [JGIT PATCH 12/21] WalkTransport must allow subclasses to implement openPush
2008-06-29 7:59 ` [JGIT PATCH 11/21] Indicate the protocol jgit doesn't support push over Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 13/21] Support push over the sftp:// dumb transport Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
If a walk implementation actually supports push (such as HTTP
push via WebDAV) we need to override openPush to return a valid
PushConnection; however, construction could fail with a
TransportException.
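As a sketch of the kind of override this enables (this mirrors what the
sftp:// transport does in the next patch of this series):

	@Override
	public PushConnection openPush() throws TransportException {
		final SftpObjectDB c = new SftpObjectDB(uri.getPath());
		final WalkPushConnection r = new WalkPushConnection(this, c);
		r.available(c.readAdvertisedRefs());
		return r;
	}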
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../org/spearce/jgit/transport/WalkTransport.java | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java
index e208b12..b3ea4aa 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkTransport.java
@@ -38,6 +38,7 @@
package org.spearce.jgit.transport;
import org.spearce.jgit.errors.NotSupportedException;
+import org.spearce.jgit.errors.TransportException;
import org.spearce.jgit.lib.Repository;
/**
@@ -58,7 +59,8 @@ abstract class WalkTransport extends Transport {
}
@Override
- public PushConnection openPush() throws NotSupportedException {
+ public PushConnection openPush() throws NotSupportedException,
+ TransportException {
final String s = getURI().getScheme();
throw new NotSupportedException("Push not supported over " + s + ".");
}
--
1.5.6.74.g8a5e
* [JGIT PATCH 13/21] Support push over the sftp:// dumb transport
2008-06-29 7:59 ` [JGIT PATCH 12/21] WalkTransport must allow subclasses to implement openPush Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 14/21] Extract readPackedRefs from TransportSftp for reuse Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
jgit now allows users to push changes over the sftp:// protocol,
taking advantage of the SFTP client available as part of JSch to
(more or less) safely update the remote repository.
Since locking is not available over SFTP, this is not suitable for
use with concurrent pushes, but is safe with concurrent fetches
during a single push. This is sufficient support to safely update
a Git repository published over HTTP where the only means of making
changes is through SSH/SFTP and the remote side does not make Git
available through the shell, or does not offer direct shell access.
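For example, with a hypothetical account and remote path the push is
simply:

	jgit push sftp://user@example.com/home/user/public_html/project.git refs/heads/master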
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../org/spearce/jgit/transport/TransportHttp.java | 2 -
.../org/spearce/jgit/transport/TransportSftp.java | 109 +++++++-
.../spearce/jgit/transport/WalkPushConnection.java | 296 ++++++++++++++++++++
.../jgit/transport/WalkRemoteObjectDatabase.java | 243 ++++++++++++++++
4 files changed, 647 insertions(+), 3 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/WalkPushConnection.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
index 231dbfe..4655950 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
@@ -112,8 +112,6 @@ class TransportHttp extends WalkTransport {
}
class HttpObjectDB extends WalkRemoteObjectDatabase {
- private static final String INFO_REFS = "../info/refs";
-
private final URL objectsUrl;
HttpObjectDB(final URL b) {
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
index c2f34f7..e5db6cc 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
@@ -40,6 +40,7 @@ package org.spearce.jgit.transport;
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
+import java.io.OutputStream;
import java.net.ConnectException;
import java.net.UnknownHostException;
import java.util.ArrayList;
@@ -78,6 +79,9 @@ import com.jcraft.jsch.SftpException;
* listing files through SFTP we can avoid needing to have current
* <code>objects/info/packs</code> or <code>info/refs</code> files on the
* remote repository and access the data directly, much as Git itself would.
+ * <p>
+ * Concurrent pushing over this transport is not supported. Multiple concurrent
+ * push operations may cause confusion in the repository state.
*
* @see WalkFetchConnection
*/
@@ -101,6 +105,14 @@ class TransportSftp extends WalkTransport {
return r;
}
+ @Override
+ public PushConnection openPush() throws TransportException {
+ final SftpObjectDB c = new SftpObjectDB(uri.getPath());
+ final WalkPushConnection r = new WalkPushConnection(this, c);
+ r.available(c.readAdvertisedRefs());
+ return r;
+ }
+
Session openSession() throws TransportException {
final String user = uri.getUser();
final String pass = uri.getPass();
@@ -246,10 +258,105 @@ class TransportSftp extends WalkTransport {
}
}
+ @Override
+ void deleteFile(final String path) throws IOException {
+ try {
+ ftp.rm(path);
+ } catch (SftpException je) {
+ if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE)
+ return;
+ throw new TransportException("Can't delete " + objectsPath
+ + "/" + path + ": " + je.getMessage(), je);
+ }
+
+ // Prune any now empty directories.
+ //
+ String dir = path;
+ int s = dir.lastIndexOf('/');
+ while (s > 0) {
+ try {
+ dir = dir.substring(0, s);
+ ftp.rmdir(dir);
+ s = dir.lastIndexOf('/');
+ } catch (SftpException je) {
+ // If we cannot delete it, leave it alone. It may have
+ // entries still in it, or maybe we lack write access on
+ // the parent. Either way it isn't a fatal error.
+ //
+ break;
+ }
+ }
+ }
+
+ @Override
+ OutputStream writeFile(final String path) throws IOException {
+ try {
+ return ftp.put(path);
+ } catch (SftpException je) {
+ if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE) {
+ mkdir_p(path);
+ try {
+ return ftp.put(path);
+ } catch (SftpException je2) {
+ je = je2;
+ }
+ }
+
+ throw new TransportException("Can't write " + objectsPath + "/"
+ + path + ": " + je.getMessage(), je);
+ }
+ }
+
+ @Override
+ void writeFile(final String path, final byte[] data) throws IOException {
+ final String lock = path + ".lock";
+ try {
+ super.writeFile(lock, data);
+ try {
+ ftp.rename(lock, path);
+ } catch (SftpException je) {
+ throw new TransportException("Can't write " + objectsPath
+ + "/" + path + ": " + je.getMessage(), je);
+ }
+ } catch (IOException err) {
+ try {
+ ftp.rm(lock);
+ } catch (SftpException e) {
+ // Ignore deletion failure, we are already
+ // failing anyway.
+ }
+ throw err;
+ }
+ }
+
+ private void mkdir_p(String path) throws IOException {
+ final int s = path.lastIndexOf('/');
+ if (s <= 0)
+ return;
+
+ path = path.substring(0, s);
+ try {
+ ftp.mkdir(path);
+ } catch (SftpException je) {
+ if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE) {
+ mkdir_p(path);
+ try {
+ ftp.mkdir(path);
+ return;
+ } catch (SftpException je2) {
+ je = je2;
+ }
+ }
+
+ throw new TransportException("Can't mkdir " + objectsPath + "/"
+ + path + ": " + je.getMessage(), je);
+ }
+ }
+
Map<String, Ref> readAdvertisedRefs() throws TransportException {
final TreeMap<String, Ref> avail = new TreeMap<String, Ref>();
try {
- final BufferedReader br = openReader("../packed-refs");
+ final BufferedReader br = openReader(PACKED_REFS);
try {
readPackedRefs(avail, br);
} finally {
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkPushConnection.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkPushConnection.java
new file mode 100644
index 0000000..ab16f65
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkPushConnection.java
@@ -0,0 +1,296 @@
+/*
+ * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ *
+ * - Neither the name of the Git Development Community nor the
+ * names of its contributors may be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+ * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.spearce.jgit.transport;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.spearce.jgit.errors.TransportException;
+import org.spearce.jgit.lib.AnyObjectId;
+import org.spearce.jgit.lib.ObjectId;
+import org.spearce.jgit.lib.PackWriter;
+import org.spearce.jgit.lib.ProgressMonitor;
+import org.spearce.jgit.lib.Ref;
+import org.spearce.jgit.lib.Repository;
+import org.spearce.jgit.lib.Ref.Storage;
+import org.spearce.jgit.transport.RemoteRefUpdate.Status;
+
+/**
+ * Generic push support for dumb transport protocols.
+ * <p>
+ * Since there are no Git-specific smarts on the remote side of the connection
+ * the client side must handle everything on its own. The generic push support
+ * requires being able to delete, create and overwrite files on the remote side,
+ * as well as create any missing directories (if necessary). Typically this can
+ * be handled through an FTP style protocol.
+ * <p>
+ * Objects not on the remote side are uploaded as pack files, using one pack
+ * file per invocation. This simplifies the implementation as only two data
+ * files need to be written to the remote repository.
+ * <p>
+ * Push support supplied by this class is not multiuser safe. Concurrent pushes
+ * to the same repository may yield an inconsistent reference database which may
+ * confuse fetch clients.
+ * <p>
+ * A single push is concurrently safe with multiple fetch requests, due to the
+ * careful order of operations used to update the repository. Clients fetching
+ * may receive transient failures due to short reads on certain files if the
+ * protocol does not support atomic file replacement.
+ *
+ * @see WalkRemoteObjectDatabase
+ */
+class WalkPushConnection extends BaseConnection implements PushConnection {
+ /** The repository this transport pushes out of. */
+ private final Repository local;
+
+ /** Location of the remote repository we are writing to. */
+ private final URIish uri;
+
+ /** Database connection to the remote repository. */
+ private final WalkRemoteObjectDatabase dest;
+
+ /**
+ * Packs already known to reside in the remote repository.
+ * <p>
+ * This is a LinkedHashMap to maintain the original order.
+ */
+ private LinkedHashMap<String, String> packNames;
+
+ /** Complete listing of refs the remote will have after our push. */
+ private Map<String, Ref> newRefs;
+
+ /**
+ * Updates which require altering the packed-refs file to complete.
+ * <p>
+ * If this collection is non-empty then any refs listed in {@link #newRefs}
+ * with a storage class of {@link Storage#PACKED} will be written.
+ */
+ private Collection<RemoteRefUpdate> packedRefUpdates;
+
+ WalkPushConnection(final WalkTransport walkTransport,
+ final WalkRemoteObjectDatabase w) {
+ local = walkTransport.local;
+ uri = walkTransport.getURI();
+ dest = w;
+ }
+
+ public void push(final ProgressMonitor monitor,
+ final Map<String, RemoteRefUpdate> refUpdates)
+ throws TransportException {
+ markStartedOperation();
+ packNames = null;
+ newRefs = new TreeMap<String, Ref>(getRefsMap());
+ packedRefUpdates = new ArrayList<RemoteRefUpdate>(refUpdates.size());
+
+ // Filter the commands and issue all deletes first. This way we
+ // can correctly handle a directory being cleared out and a new
+ // ref using the directory name being created.
+ //
+ final List<RemoteRefUpdate> updates = new ArrayList<RemoteRefUpdate>();
+ for (final RemoteRefUpdate u : refUpdates.values()) {
+ if (AnyObjectId.equals(ObjectId.zeroId(), u.getNewObjectId()))
+ deleteCommand(u);
+ else
+ updates.add(u);
+ }
+
+ // If we have any updates we need to upload the objects first, to
+ // prevent creating refs pointing at non-existent data. Then we
+ // can update the refs, and the info-refs file for dumb transports.
+ //
+ if (!updates.isEmpty())
+ sendpack(updates, monitor);
+ for (final RemoteRefUpdate u : updates)
+ updateCommand(u);
+
+ if (!packedRefUpdates.isEmpty()) {
+ try {
+ dest.writePackedRefs(newRefs.values());
+ for (final RemoteRefUpdate u : packedRefUpdates)
+ u.setStatus(Status.OK);
+ } catch (IOException err) {
+ for (final RemoteRefUpdate u : packedRefUpdates) {
+ u.setStatus(Status.REJECTED_OTHER_REASON);
+ u.setMessage(err.getMessage());
+ }
+ throw new TransportException(uri, "failed updating refs", err);
+ }
+ }
+
+ try {
+ dest.writeInfoRefs(newRefs.values());
+ } catch (IOException err) {
+ throw new TransportException(uri, "failed updating refs", err);
+ }
+ }
+
+ @Override
+ public void close() {
+ dest.close();
+ }
+
+ private void sendpack(final List<RemoteRefUpdate> updates,
+ final ProgressMonitor monitor) throws TransportException {
+ String pathPack = null;
+ String pathIdx = null;
+
+ try {
+ final PackWriter pw = new PackWriter(local, monitor);
+ final List<ObjectId> need = new ArrayList<ObjectId>();
+ final List<ObjectId> have = new ArrayList<ObjectId>();
+ for (final RemoteRefUpdate r : updates)
+ need.add(r.getNewObjectId());
+ for (final Ref r : getRefs()) {
+ have.add(r.getObjectId());
+ if (r.getPeeledObjectId() != null)
+ have.add(r.getPeeledObjectId());
+ }
+ pw.preparePack(need, have, false, true);
+
+ // We don't have to continue further if the pack will
+ // be an empty pack, as the remote has all objects it
+ // needs to complete this change.
+ //
+ if (pw.getObjectsNumber() == 0)
+ return;
+
+ packNames = new LinkedHashMap<String, String>();
+ for (final String n : dest.getPackNames())
+ packNames.put(n, n);
+
+ final String base = "pack-" + pw.computeName();
+ final String packName = base + ".pack";
+ pathPack = "pack/" + packName;
+ pathIdx = "pack/" + base + ".idx";
+
+ if (packNames.remove(packName) != null) {
+ // The remote already contains this pack. We should
+ // remove the index before overwriting to prevent bad
+ // offsets from appearing to clients.
+ //
+ dest.writeInfoPacks(packNames.keySet());
+ dest.deleteFile(pathIdx);
+ }
+
+ // Write the pack file, then the index, as readers look the
+ // other direction (index, then pack file).
+ //
+ OutputStream os = dest.writeFile(pathPack);
+ try {
+ pw.writePack(os);
+ } finally {
+ os.close();
+ }
+
+ os = dest.writeFile(pathIdx);
+ try {
+ pw.writeIndex(os);
+ } finally {
+ os.close();
+ }
+
+ // Record the pack at the start of the pack info list. This
+ // way clients are likely to consult the newest pack first,
+ // and discover the most recent objects there.
+ //
+ final ArrayList<String> infoPacks = new ArrayList<String>();
+ infoPacks.add(packName);
+ infoPacks.addAll(packNames.keySet());
+ dest.writeInfoPacks(infoPacks);
+
+ } catch (IOException err) {
+ safeDelete(pathIdx);
+ safeDelete(pathPack);
+
+ throw new TransportException(uri, "cannot store objects", err);
+ }
+ }
+
+ private void safeDelete(final String path) {
+ if (path != null) {
+ try {
+ dest.deleteFile(path);
+ } catch (IOException cleanupFailure) {
+ // Ignore the deletion failure. We probably are
+ // already failing and were just trying to pick
+ // up after ourselves.
+ }
+ }
+ }
+
+ private void deleteCommand(final RemoteRefUpdate u) {
+ final Ref r = newRefs.remove(u.getRemoteName());
+ if (r == null) {
+ // Already gone.
+ //
+ u.setStatus(Status.OK);
+ return;
+ }
+
+ if (r.getStorage().isPacked())
+ packedRefUpdates.add(u);
+
+ if (r.getStorage().isLoose()) {
+ try {
+ dest.deleteRef(u.getRemoteName());
+ u.setStatus(Status.OK);
+ } catch (IOException e) {
+ u.setStatus(Status.REJECTED_OTHER_REASON);
+ u.setMessage(e.getMessage());
+ }
+ }
+ }
+
+ private void updateCommand(final RemoteRefUpdate u) {
+ try {
+ dest.writeRef(u.getRemoteName(), u.getNewObjectId());
+ newRefs.put(u.getRemoteName(), new Ref(Storage.LOOSE, u
+ .getRemoteName(), u.getNewObjectId()));
+ u.setStatus(Status.OK);
+ } catch (IOException e) {
+ u.setStatus(Status.REJECTED_OTHER_REASON);
+ u.setMessage(e.getMessage());
+ }
+ }
+}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java
index 2196fc9..57d525f 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java
@@ -43,10 +43,14 @@ import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.StringWriter;
import java.util.ArrayList;
import java.util.Collection;
import org.spearce.jgit.lib.Constants;
+import org.spearce.jgit.lib.ObjectId;
+import org.spearce.jgit.lib.Ref;
import org.spearce.jgit.util.NB;
/**
@@ -68,6 +72,10 @@ abstract class WalkRemoteObjectDatabase {
static final String INFO_HTTP_ALTERNATES = "info/http-alternates";
+ static final String INFO_REFS = "../info/refs";
+
+ static final String PACKED_REFS = "../packed-refs";
+
/**
* Obtain the list of available packs (if any).
* <p>
@@ -163,6 +171,241 @@ abstract class WalkRemoteObjectDatabase {
abstract void close();
/**
+ * Delete a file from the object database.
+ * <p>
+ * Path may start with <code>../</code> to request deletion of a file that
+ * resides in the repository itself.
+ * <p>
+ * When possible empty directories must be removed, up to but not including
+ * the current object database directory itself.
+ * <p>
+ * This method does not support deletion of directories.
+ *
+ * @param path
+ * name of the item to be removed, relative to the current object
+ * database.
+ * @throws IOException
+ * deletion is not supported, or deletion failed.
+ */
+ void deleteFile(final String path) throws IOException {
+ throw new IOException("Deleting '" + path + "' not supported.");
+ }
+
+ /**
+ * Open a remote file for writing.
+ * <p>
+ * Path may start with <code>../</code> to request writing of a file that
+ * resides in the repository itself.
+ * <p>
+ * The requested path may or may not exist. If the path already exists as a
+ * file the file should be truncated and completely replaced.
+ * <p>
+ * This method creates any missing parent directories, if necessary.
+ *
+ * @param path
+ * name of the file to write, relative to the current object
+ * database.
+ * @return stream to write into this file. Caller must close the stream to
+ * complete the write request. The stream is not buffered and each
+ * write may cause a network request/response so callers should
+ * buffer to smooth out small writes.
+ * @throws IOException
+ * writing is not supported, or attempting to write the file
+ * failed, possibly due to permissions or remote disk full, etc.
+ */
+ OutputStream writeFile(final String path) throws IOException {
+ throw new IOException("Writing of '" + path + "' not supported.");
+ }
+
+ /**
+ * Atomically write a remote file.
+ * <p>
+ * This method attempts to perform the update as atomically as it can,
+ * reducing (or eliminating) the time that clients might be able to see
+ * partial file content. This method is not suitable for very large
+ * transfers as the complete content must be passed as an argument.
+ * <p>
+ * Path may start with <code>../</code> to request writing of a file that
+ * resides in the repository itself.
+ * <p>
+ * The requested path may or may not exist. If the path already exists as a
+ * file the file should be truncated and completely replaced.
+ * <p>
+ * This method creates any missing parent directories, if necessary.
+ *
+ * @param path
+ * name of the file to write, relative to the current object
+ * database.
+ * @param data
+ * complete new content of the file.
+ * @throws IOException
+ * writing is not supported, or attempting to write the file
+ * failed, possibly due to permissions or remote disk full, etc.
+ */
+ void writeFile(final String path, final byte[] data) throws IOException {
+ final OutputStream os = writeFile(path);
+ try {
+ os.write(data);
+ } finally {
+ os.close();
+ }
+ }
+
+ /**
+ * Delete a loose ref from the remote repository.
+ *
+ * @param name
+ * name of the ref within the ref space, for example
+ * <code>refs/heads/pu</code>.
+ * @throws IOException
+ * deletion is not supported, or deletion failed.
+ */
+ void deleteRef(final String name) throws IOException {
+ deleteFile("../" + name);
+ }
+
+ /**
+ * Overwrite (or create) a loose ref in the remote repository.
+ * <p>
+ * This method creates any missing parent directories, if necessary.
+ *
+ * @param name
+ * name of the ref within the ref space, for example
+ * <code>refs/heads/pu</code>.
+ * @param value
+ * new value to store in this ref. Must not be null.
+ * @throws IOException
+ * writing is not supported, or attempting to write the file
+ * failed, possibly due to permissions or remote disk full, etc.
+ */
+ void writeRef(final String name, final ObjectId value) throws IOException {
+ final ByteArrayOutputStream b;
+
+ b = new ByteArrayOutputStream(Constants.OBJECT_ID_LENGTH * 2 + 1);
+ value.copyTo(b);
+ b.write('\n');
+
+ writeFile("../" + name, b.toByteArray());
+ }
+
+ /**
+ * Rebuild the {@link #INFO_PACKS} for dumb transport clients.
+ * <p>
+ * This method rebuilds the contents of the {@link #INFO_PACKS} file to
+ * match the passed list of pack names.
+ *
+ * @param packNames
+ * names of available pack files, in the order they should appear
+ * in the file. Valid pack name strings are of the form
+ * <code>pack-035760ab452d6eebd123add421f253ce7682355a.pack</code>.
+ * @throws IOException
+ * writing is not supported, or attempting to write the file
+ * failed, possibly due to permissions or remote disk full, etc.
+ */
+ void writeInfoPacks(final Collection<String> packNames) throws IOException {
+ final StringBuilder w = new StringBuilder();
+ for (final String n : packNames) {
+ w.append("P ");
+ w.append(n);
+ w.append('\n');
+ }
+ writeFile(INFO_PACKS, Constants.encodeASCII(w.toString()));
+ }
+
+ /**
+ * Rebuild the {@link #INFO_REFS} for dumb transport clients.
+ * <p>
+ * This method rebuilds the contents of the {@link #INFO_REFS} file to match
+ * the passed list of references.
+ *
+ * @param refs
+ * the complete set of references the remote side now has. This
+ * should have been computed by applying updates to the
+ * advertised refs already discovered.
+ * @throws IOException
+ * writing is not supported, or attempting to write the file
+ * failed, possibly due to permissions or remote disk full, etc.
+ */
+ void writeInfoRefs(final Collection<Ref> refs) throws IOException {
+ final StringWriter w = new StringWriter();
+ final char[] tmp = new char[Constants.OBJECT_ID_LENGTH * 2];
+ for (final Ref r : refs) {
+ if (Constants.HEAD.equals(r.getName())) {
+ // Historically HEAD has never been published through
+ // the INFO_REFS file. This is a mistake, but it's the
+ // way things are.
+ //
+ continue;
+ }
+
+ r.getObjectId().copyTo(tmp, w);
+ w.write('\t');
+ w.write(r.getName());
+ w.write('\n');
+
+ if (r.getPeeledObjectId() != null) {
+ r.getPeeledObjectId().copyTo(tmp, w);
+ w.write('\t');
+ w.write(r.getName());
+ w.write("^{}\n");
+ }
+ }
+ writeFile(INFO_REFS, Constants.encodeASCII(w.toString()));
+ }
+
+ /**
+ * Rebuild the {@link #PACKED_REFS} file.
+ * <p>
+ * This method rebuilds the contents of the {@link #PACKED_REFS} file to
+ * match the passed list of references, including only those refs that have
+ * a storage type of {@link Ref.Storage#PACKED}.
+ *
+ * @param refs
+ * the complete set of references the remote side now has. This
+ * should have been computed by applying updates to the
+ * advertised refs already discovered.
+ * @throws IOException
+ * writing is not supported, or attempting to write the file
+ * failed, possibly due to permissions or remote disk full, etc.
+ */
+ void writePackedRefs(final Collection<Ref> refs) throws IOException {
+ boolean peeled = false;
+
+ for (final Ref r : refs) {
+ if (r.getStorage() != Ref.Storage.PACKED)
+ continue;
+ if (r.getPeeledObjectId() != null)
+ peeled = true;
+ }
+
+ final StringWriter w = new StringWriter();
+ if (peeled) {
+ w.write("# pack-refs with:");
+ if (peeled)
+ w.write(" peeled");
+ w.write('\n');
+ }
+
+ final char[] tmp = new char[Constants.OBJECT_ID_LENGTH * 2];
+ for (final Ref r : refs) {
+ if (r.getStorage() != Ref.Storage.PACKED)
+ continue;
+
+ r.getObjectId().copyTo(tmp, w);
+ w.write(' ');
+ w.write(r.getName());
+ w.write('\n');
+
+ if (r.getPeeledObjectId() != null) {
+ w.write('^');
+ r.getPeeledObjectId().copyTo(tmp, w);
+ w.write('\n');
+ }
+ }
+ writeFile(PACKED_REFS, Constants.encodeASCII(w.toString()));
+ }
+
+ /**
* Open a buffered reader around a file.
* <p>
* This is shorthand for calling {@link #open(String)} and then wrapping it
--
1.5.6.74.g8a5e
* [JGIT PATCH 14/21] Extract readPackedRefs from TransportSftp for reuse
2008-06-29 7:59 ` [JGIT PATCH 13/21] Support push over the sftp:// dumb transport Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 15/21] Specialized byte array output stream for large files Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
Other dumb transports may need this functionality available to them.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../org/spearce/jgit/transport/TransportHttp.java | 5 ++
.../org/spearce/jgit/transport/TransportSftp.java | 47 ++--------------
.../src/org/spearce/jgit/transport/URIish.java | 22 ++++++++
.../jgit/transport/WalkRemoteObjectDatabase.java | 58 ++++++++++++++++++++
4 files changed, 91 insertions(+), 41 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
index 4655950..2f28f2c 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
@@ -119,6 +119,11 @@ class TransportHttp extends WalkTransport {
}
@Override
+ URIish getURI() {
+ return new URIish(objectsUrl);
+ }
+
+ @Override
Collection<WalkRemoteObjectDatabase> getAlternates() throws IOException {
try {
return readAlternates(INFO_HTTP_ALTERNATES);
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
index e5db6cc..a33406b 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportSftp.java
@@ -193,6 +193,11 @@ class TransportSftp extends WalkTransport {
}
@Override
+ URIish getURI() {
+ return uri.setPath(objectsPath);
+ }
+
+ @Override
Collection<WalkRemoteObjectDatabase> getAlternates() throws IOException {
try {
return readAlternates(INFO_ALTERNATES);
@@ -355,52 +360,12 @@ class TransportSftp extends WalkTransport {
Map<String, Ref> readAdvertisedRefs() throws TransportException {
final TreeMap<String, Ref> avail = new TreeMap<String, Ref>();
- try {
- final BufferedReader br = openReader(PACKED_REFS);
- try {
- readPackedRefs(avail, br);
- } finally {
- br.close();
- }
- } catch (FileNotFoundException notPacked) {
- // Perhaps it wasn't worthwhile, or is just an older repository.
- } catch (IOException e) {
- throw new TransportException(uri, "error in packed-refs", e);
- }
+ readPackedRefs(avail);
readRef(avail, "../HEAD", "HEAD");
readLooseRefs(avail, "../refs", "refs/");
return avail;
}
- private void readPackedRefs(final TreeMap<String, Ref> avail,
- final BufferedReader br) throws IOException {
- Ref last = null;
- for (;;) {
- String line = br.readLine();
- if (line == null)
- break;
- if (line.charAt(0) == '#')
- continue;
- if (line.charAt(0) == '^') {
- if (last == null)
- throw new TransportException("Peeled line before ref.");
- final ObjectId id = ObjectId.fromString(line + 1);
- last = new Ref(Ref.Storage.PACKED, last.getName(), last
- .getObjectId(), id);
- avail.put(last.getName(), last);
- continue;
- }
-
- final int sp = line.indexOf(' ');
- if (sp < 0)
- throw new TransportException("Unrecognized ref: " + line);
- final ObjectId id = ObjectId.fromString(line.substring(0, sp));
- final String name = line.substring(sp + 1);
- last = new Ref(Ref.Storage.PACKED, name, id);
- avail.put(last.getName(), last);
- }
- }
-
private void readLooseRefs(final TreeMap<String, Ref> avail,
final String dir, final String prefix)
throws TransportException {
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java b/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java
index 307b591..9e7ca83 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java
@@ -39,6 +39,7 @@
package org.spearce.jgit.transport;
import java.net.URISyntaxException;
+import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
@@ -100,6 +101,27 @@ public class URIish {
}
}
+ /**
+ * Construct a URIish from a standard URL.
+ *
+ * @param u
+ * the source URL to convert from.
+ */
+ public URIish(final URL u) {
+ scheme = u.getProtocol();
+ path = u.getPath();
+
+ final String ui = u.getUserInfo();
+ if (ui != null) {
+ final int d = ui.indexOf(':');
+ user = d < 0 ? ui : ui.substring(0, d);
+ pass = d < 0 ? null : ui.substring(d + 1);
+ }
+
+ port = u.getPort();
+ host = u.getHost();
+ }
+
/** Create an empty, non-configured URI. */
public URIish() {
// Configure nothing.
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java
index 57d525f..4f5a1cb 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkRemoteObjectDatabase.java
@@ -47,7 +47,9 @@ import java.io.OutputStream;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.Collection;
+import java.util.Map;
+import org.spearce.jgit.errors.TransportException;
import org.spearce.jgit.lib.Constants;
import org.spearce.jgit.lib.ObjectId;
import org.spearce.jgit.lib.Ref;
@@ -76,6 +78,8 @@ abstract class WalkRemoteObjectDatabase {
static final String PACKED_REFS = "../packed-refs";
+ abstract URIish getURI();
+
/**
* Obtain the list of available packs (if any).
* <p>
@@ -469,6 +473,60 @@ abstract class WalkRemoteObjectDatabase {
}
}
+ /**
+ * Read a standard Git packed-refs file to discover known references.
+ *
+ * @param avail
+ * return collection of references. Any existing entries will be
+ * replaced if they are found in the packed-refs file.
+ * @throws TransportException
+ * an error occurred reading from the packed refs file.
+ */
+ protected void readPackedRefs(final Map<String, Ref> avail)
+ throws TransportException {
+ try {
+ final BufferedReader br = openReader(PACKED_REFS);
+ try {
+ readPackedRefsImpl(avail, br);
+ } finally {
+ br.close();
+ }
+ } catch (FileNotFoundException notPacked) {
+ // Perhaps it wasn't worthwhile, or is just an older repository.
+ } catch (IOException e) {
+ throw new TransportException(getURI(), "error in packed-refs", e);
+ }
+ }
+
+ private void readPackedRefsImpl(final Map<String, Ref> avail,
+ final BufferedReader br) throws IOException {
+ Ref last = null;
+ for (;;) {
+ String line = br.readLine();
+ if (line == null)
+ break;
+ if (line.charAt(0) == '#')
+ continue;
+ if (line.charAt(0) == '^') {
+ if (last == null)
+ throw new TransportException("Peeled line before ref.");
+ final ObjectId id = ObjectId.fromString(line.substring(1));
+ last = new Ref(Ref.Storage.PACKED, last.getName(), last
+ .getObjectId(), id);
+ avail.put(last.getName(), last);
+ continue;
+ }
+
+ final int sp = line.indexOf(' ');
+ if (sp < 0)
+ throw new TransportException("Unrecognized ref: " + line);
+ final ObjectId id = ObjectId.fromString(line.substring(0, sp));
+ final String name = line.substring(sp + 1);
+ last = new Ref(Ref.Storage.PACKED, name, id);
+ avail.put(last.getName(), last);
+ }
+ }
+
static final class FileStream {
final InputStream in;
--
1.5.6.74.g8a5e
* [JGIT PATCH 15/21] Specialized byte array output stream for large files
2008-06-29 7:59 ` [JGIT PATCH 14/21] Extract readPackedRefs from TransportSftp for reuse Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
Some transports may require that we know the total byte count (and
perhaps MD5 checksum) of a pack file before we can send it to the
transport during a push operation. Materializing the pack locally
prior to transfer can be somewhat costly, but very small packs may
be buffered entirely in core.
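A rough sketch of the intended use follows; the PackWriter instance
"pw" and the destination stream "out" are assumed to exist and are not
part of this patch:

	final TemporaryBuffer buf = new TemporaryBuffer();
	try {
		pw.writePack(buf);      // spills to a temporary file past the in-core limit
		buf.close();            // length() and writeTo() are only valid after close()
		final long packSize = buf.length();   // e.g. usable as a Content-Length value
		buf.writeTo(out, null); // a null ProgressMonitor disables progress reporting
	} finally {
		buf.destroy();          // frees memory and deletes the temporary file, if any
	}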
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/util/TemporaryBuffer.java | 260 ++++++++++++++++++++
1 files changed, 260 insertions(+), 0 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/util/TemporaryBuffer.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/util/TemporaryBuffer.java b/org.spearce.jgit/src/org/spearce/jgit/util/TemporaryBuffer.java
new file mode 100644
index 0000000..72bdbb1
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/util/TemporaryBuffer.java
@@ -0,0 +1,260 @@
+/*
+ * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ *
+ * - Neither the name of the Git Development Community nor the
+ * names of its contributors may be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+ * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.spearce.jgit.util;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+
+import org.spearce.jgit.lib.NullProgressMonitor;
+import org.spearce.jgit.lib.ProgressMonitor;
+
+/**
+ * A fully buffered output stream using local disk storage for large data.
+ * <p>
+ * Initially this output stream buffers to memory, like ByteArrayOutputStream
+ * might do, but it shifts to using an on disk temporary file if the output gets
+ * too large.
+ * <p>
+ * The content of this buffered stream may be sent to another OutputStream only
+ * after this stream has been properly closed by {@link #close()}.
+ */
+public class TemporaryBuffer extends OutputStream {
+ private static final int DEFAULT_IN_CORE_LIMIT = 1024 * 1024;
+
+ /** Chain of data, if we are still completely in-core; otherwise null. */
+ private ArrayList<Block> blocks;
+
+ /**
+ * Maximum number of bytes we will permit storing in memory.
+ * <p>
+ * When this limit is reached the data will be shifted to a file on disk,
+ * preventing the JVM heap from growing out of control.
+ */
+ private int inCoreLimit;
+
+ /**
+ * Location of our temporary file if we are on disk; otherwise null.
+ * <p>
+ * If we exceeded the {@link #inCoreLimit} we nulled out {@link #blocks} and
+ * created this file instead. All output goes here through {@link #diskOut}.
+ */
+ private File onDiskFile;
+
+ /** If writing to {@link #onDiskFile} this is a buffered stream to it. */
+ private OutputStream diskOut;
+
+ /** Create a new empty temporary buffer. */
+ public TemporaryBuffer() {
+ inCoreLimit = DEFAULT_IN_CORE_LIMIT;
+ blocks = new ArrayList<Block>(inCoreLimit / Block.SZ);
+ blocks.add(new Block());
+ }
+
+ @Override
+ public void write(final int b) throws IOException {
+ if (blocks == null) {
+ diskOut.write(b);
+ return;
+ }
+
+ Block s = last();
+ if (s.isFull()) {
+ if (reachedInCoreLimit()) {
+ diskOut.write(b);
+ return;
+ }
+
+ s = new Block();
+ blocks.add(s);
+ }
+ s.buffer[s.count++] = (byte) b;
+ }
+
+ @Override
+ public void write(final byte[] b, int off, int len) throws IOException {
+ if (blocks != null) {
+ while (len > 0) {
+ Block s = last();
+ if (s.isFull()) {
+ if (reachedInCoreLimit())
+ break;
+
+ s = new Block();
+ blocks.add(s);
+ }
+
+ final int n = Math.min(Block.SZ - s.count, len);
+ System.arraycopy(b, off, s.buffer, s.count, n);
+ s.count += n;
+ len -= n;
+ off += n;
+ }
+ }
+
+ if (len > 0)
+ diskOut.write(b, off, len);
+ }
+
+ private Block last() {
+ return blocks.get(blocks.size() - 1);
+ }
+
+ private boolean reachedInCoreLimit() throws IOException {
+ if (blocks.size() * Block.SZ < inCoreLimit)
+ return false;
+
+ onDiskFile = File.createTempFile("jgit_", ".buffer");
+ diskOut = new FileOutputStream(onDiskFile);
+
+ final Block last = blocks.remove(blocks.size() - 1);
+ for (final Block b : blocks)
+ diskOut.write(b.buffer, 0, b.count);
+ blocks = null;
+
+ diskOut = new BufferedOutputStream(diskOut, Block.SZ);
+ diskOut.write(last.buffer, 0, last.count);
+ return true;
+ }
+
+ public void close() throws IOException {
+ if (diskOut != null) {
+ try {
+ diskOut.close();
+ } finally {
+ diskOut = null;
+ }
+ }
+ }
+
+ /**
+ * Obtain the length (in bytes) of the buffer.
+ * <p>
+ * The length is only accurate after {@link #close()} has been invoked.
+ *
+ * @return total length of the buffer, in bytes.
+ */
+ public long length() {
+ if (onDiskFile != null)
+ return onDiskFile.length();
+
+ final Block last = last();
+ return ((long) blocks.size()) * Block.SZ - (Block.SZ - last.count);
+ }
+
+ /**
+ * Send this buffer to an output stream.
+ * <p>
+ * This method may only be invoked after {@link #close()} has completed
+ * normally, to ensure all data is completely transferred.
+ *
+ * @param os
+ * stream to send this buffer's complete content to.
+ * @param pm
+ * if not null progress updates are sent here. Caller should
+ * initialize the task and the number of work units to
+ * <code>{@link #length()}/1024</code>.
+ * @throws IOException
+ * an error occurred reading from a temporary file on the local
+ * system, or writing to the output stream.
+ */
+ public void writeTo(final OutputStream os, ProgressMonitor pm)
+ throws IOException {
+ if (pm == null)
+ pm = new NullProgressMonitor();
+ if (blocks != null) {
+ // Everything is in core so we can stream directly to the output.
+ //
+ for (final Block b : blocks) {
+ os.write(b.buffer, 0, b.count);
+ pm.update(b.count / 1024);
+ }
+ } else {
+ // Reopen the temporary file and copy the contents.
+ //
+ final FileInputStream in = new FileInputStream(onDiskFile);
+ try {
+ int cnt;
+ final byte[] buf = new byte[Block.SZ];
+ while ((cnt = in.read(buf)) >= 0) {
+ os.write(buf, 0, cnt);
+ pm.update(cnt / 1024);
+ }
+ } finally {
+ in.close();
+ }
+ }
+ }
+
+ /** Clear this buffer so it has no data, and cannot be used again. */
+ public void destroy() {
+ blocks = null;
+
+ if (diskOut != null) {
+ try {
+ diskOut.close();
+ } catch (IOException err) {
+ // We shouldn't encounter an error closing the file.
+ } finally {
+ diskOut = null;
+ }
+ }
+
+ if (onDiskFile != null) {
+ if (!onDiskFile.delete())
+ onDiskFile.deleteOnExit();
+ onDiskFile = null;
+ }
+ }
+
+ private static class Block {
+ static final int SZ = 8 * 1024;
+
+ final byte[] buffer = new byte[SZ];
+
+ int count;
+
+ boolean isFull() {
+ return count == SZ;
+ }
+ }
+}
--
1.5.6.74.g8a5e
* [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility
2008-06-29 7:59 ` [JGIT PATCH 15/21] Specialized byte array output stream for large files Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 17/21] Misc. documentation fixes to Base64 utility Shawn O. Pearce
2008-06-29 13:51 ` [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility Robin Rosenberg
0 siblings, 2 replies; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
From: Robert Harder <rob@iharder.net>
Some transports require Base64 encoding of certain data fields, such
as the "encryption" used in HTTP basic authentication header fields.
The wise people at Sun have never included the incredibly widely used
Base64 encoding/decoding algorithms as part of the base JRE, forcing
everyone to include their own library. Fortunately for us Robert
Harder distributes one in the public domain.
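For illustration, this is roughly how a transport could use the class to
build a basic authentication header. The method below is only a sketch and
is not part of this patch, but encodeBytes() and DONT_BREAK_LINES are
exactly the API being added:

  static void authenticate(final java.net.HttpURLConnection c,
      final String user, final String pass)
      throws java.io.UnsupportedEncodingException {
    final byte[] userPass = (user + ":" + pass).getBytes("UTF-8");
    // DONT_BREAK_LINES keeps a long credential from picking up an embedded
    // newline, which would corrupt the Authorization header value.
    c.setRequestProperty("Authorization",
        "Basic " + Base64.encodeBytes(userPass, Base64.DONT_BREAK_LINES));
  }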
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/util/Base64.java | 1459 ++++++++++++++++++++
1 files changed, 1459 insertions(+), 0 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java b/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
new file mode 100644
index 0000000..b0c19b6
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
@@ -0,0 +1,1459 @@
+//
+// NOTE: The following source code is the iHarder.net public domain
+// Base64 library and is provided here as a convenience. For updates,
+// problems, questions, etc. regarding this code, please visit:
+// http://iharder.sourceforge.net/current/java/base64/
+//
+
+package org.spearce.jgit.util;
+
+
+/**
+ * Encodes and decodes to and from Base64 notation.
+ *
+ * <p>
+ * Change Log:
+ * </p>
+ * <ul>
+ * <li>v2.1 - Cleaned up javadoc comments and unused variables and methods. Added
+ * some convenience methods for reading and writing to and from files.</li>
+ * <li>v2.0.2 - Now specifies UTF-8 encoding in places where the code fails on systems
+ * with other encodings (like EBCDIC).</li>
+ * <li>v2.0.1 - Fixed an error when decoding a single byte, that is, when the
+ * encoded data was a single byte.</li>
+ * <li>v2.0 - I got rid of methods that used booleans to set options.
+ * Now everything is more consolidated and cleaner. The code now detects
+ * when data that's being decoded is gzip-compressed and will decompress it
+ * automatically. Generally things are cleaner. You'll probably have to
+ * change some method calls that you were making to support the new
+ * options format (<tt>int</tt>s that you "OR" together).</li>
+ * <li>v1.5.1 - Fixed bug when decompressing and decoding to a
+ * byte[] using <tt>decode( String s, boolean gzipCompressed )</tt>.
+ * Added the ability to "suspend" encoding in the Output Stream so
+ * you can turn on and off the encoding if you need to embed base64
+ * data in an otherwise "normal" stream (like an XML file).</li>
+ * <li>v1.5 - Output stream passes on the flush() command but doesn't do anything itself.
+ * This helps when using GZIP streams.
+ * Added the ability to GZip-compress objects before encoding them.</li>
+ * <li>v1.4 - Added helper methods to read/write files.</li>
+ * <li>v1.3.6 - Fixed OutputStream.flush() so that 'position' is reset.</li>
+ * <li>v1.3.5 - Added flag to turn on and off line breaks. Fixed bug in input stream
+ * where last buffer being read, if not completely full, was not returned.</li>
+ * <li>v1.3.4 - Fixed when "improperly padded stream" error was thrown at the wrong time.</li>
+ * <li>v1.3.3 - Fixed I/O streams which were totally messed up.</li>
+ * </ul>
+ *
+ * <p>
+ * I am placing this code in the Public Domain. Do with it as you will.
+ * This software comes with no guarantees or warranties but with
+ * plenty of well-wishing instead!
+ * Please visit <a href="http://iharder.net/base64">http://iharder.net/base64</a>
+ * periodically to check for updates or to contribute improvements.
+ * </p>
+ *
+ * @author Robert Harder
+ * @author rob@iharder.net
+ * @version 2.1
+ */
+public class Base64
+{
+
+/* ******** P U B L I C F I E L D S ******** */
+
+
+ /** No options specified. Value is zero. */
+ public final static int NO_OPTIONS = 0;
+
+ /** Specify encoding. */
+ public final static int ENCODE = 1;
+
+
+ /** Specify decoding. */
+ public final static int DECODE = 0;
+
+
+ /** Specify that data should be gzip-compressed. */
+ public final static int GZIP = 2;
+
+
+ /** Don't break lines when encoding (violates strict Base64 specification) */
+ public final static int DONT_BREAK_LINES = 8;
+
+
+/* ******** P R I V A T E F I E L D S ******** */
+
+
+ /** Maximum line length (76) of Base64 output. */
+ private final static int MAX_LINE_LENGTH = 76;
+
+
+ /** The equals sign (=) as a byte. */
+ private final static byte EQUALS_SIGN = (byte)'=';
+
+
+ /** The new line character (\n) as a byte. */
+ private final static byte NEW_LINE = (byte)'\n';
+
+
+ /** Preferred encoding. */
+ private final static String PREFERRED_ENCODING = "UTF-8";
+
+
+ /** The 64 valid Base64 values. */
+ private final static byte[] ALPHABET;
+ private final static byte[] _NATIVE_ALPHABET = /* May be something funny like EBCDIC */
+ {
+ (byte)'A', (byte)'B', (byte)'C', (byte)'D', (byte)'E', (byte)'F', (byte)'G',
+ (byte)'H', (byte)'I', (byte)'J', (byte)'K', (byte)'L', (byte)'M', (byte)'N',
+ (byte)'O', (byte)'P', (byte)'Q', (byte)'R', (byte)'S', (byte)'T', (byte)'U',
+ (byte)'V', (byte)'W', (byte)'X', (byte)'Y', (byte)'Z',
+ (byte)'a', (byte)'b', (byte)'c', (byte)'d', (byte)'e', (byte)'f', (byte)'g',
+ (byte)'h', (byte)'i', (byte)'j', (byte)'k', (byte)'l', (byte)'m', (byte)'n',
+ (byte)'o', (byte)'p', (byte)'q', (byte)'r', (byte)'s', (byte)'t', (byte)'u',
+ (byte)'v', (byte)'w', (byte)'x', (byte)'y', (byte)'z',
+ (byte)'0', (byte)'1', (byte)'2', (byte)'3', (byte)'4', (byte)'5',
+ (byte)'6', (byte)'7', (byte)'8', (byte)'9', (byte)'+', (byte)'/'
+ };
+
+ /** Determine which ALPHABET to use. */
+ static
+ {
+ byte[] __bytes;
+ try
+ {
+ __bytes = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".getBytes( PREFERRED_ENCODING );
+ } // end try
+ catch (java.io.UnsupportedEncodingException use)
+ {
+ __bytes = _NATIVE_ALPHABET; // Fall back to native encoding
+ } // end catch
+ ALPHABET = __bytes;
+ } // end static
+
+
+ /**
+ * Translates a Base64 value to either its 6-bit reconstruction value
+ * or a negative number indicating some other meaning.
+ **/
+ private final static byte[] DECODABET =
+ {
+ -9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 0 - 8
+ -5,-5, // Whitespace: Tab and Linefeed
+ -9,-9, // Decimal 11 - 12
+ -5, // Whitespace: Carriage Return
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 14 - 26
+ -9,-9,-9,-9,-9, // Decimal 27 - 31
+ -5, // Whitespace: Space
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 33 - 42
+ 62, // Plus sign at decimal 43
+ -9,-9,-9, // Decimal 44 - 46
+ 63, // Slash at decimal 47
+ 52,53,54,55,56,57,58,59,60,61, // Numbers zero through nine
+ -9,-9,-9, // Decimal 58 - 60
+ -1, // Equals sign at decimal 61
+ -9,-9,-9, // Decimal 62 - 64
+ 0,1,2,3,4,5,6,7,8,9,10,11,12,13, // Letters 'A' through 'N'
+ 14,15,16,17,18,19,20,21,22,23,24,25, // Letters 'O' through 'Z'
+ -9,-9,-9,-9,-9,-9, // Decimal 91 - 96
+ 26,27,28,29,30,31,32,33,34,35,36,37,38, // Letters 'a' through 'm'
+ 39,40,41,42,43,44,45,46,47,48,49,50,51, // Letters 'n' through 'z'
+ -9,-9,-9,-9 // Decimal 123 - 126
+ /*,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 127 - 139
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 140 - 152
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 153 - 165
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 166 - 178
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 179 - 191
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 192 - 204
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 205 - 217
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 218 - 230
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9, // Decimal 231 - 243
+ -9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9,-9 // Decimal 244 - 255 */
+ };
+
+ // I think I end up not using the BAD_ENCODING indicator.
+ //private final static byte BAD_ENCODING = -9; // Indicates error in encoding
+ private final static byte WHITE_SPACE_ENC = -5; // Indicates white space in encoding
+ private final static byte EQUALS_SIGN_ENC = -1; // Indicates equals sign in encoding
+
+
+ /** Defeats instantiation. */
+ private Base64(){}
+
+
+
+/* ******** E N C O D I N G M E T H O D S ******** */
+
+
+ /**
+ * Encodes up to the first three bytes of array <var>threeBytes</var>
+ * and returns a four-byte array in Base64 notation.
+ * The actual number of significant bytes in your array is
+ * given by <var>numSigBytes</var>.
+ * The array <var>threeBytes</var> needs only be as big as
+ * <var>numSigBytes</var>.
+ * Code can reuse a byte array by passing a four-byte array as <var>b4</var>.
+ *
+ * @param b4 A reusable byte array to reduce array instantiation
+ * @param threeBytes the array to convert
+ * @param numSigBytes the number of significant bytes in your array
+ * @return four byte array in Base64 notation.
+ * @since 1.5.1
+ */
+ private static byte[] encode3to4( byte[] b4, byte[] threeBytes, int numSigBytes )
+ {
+ encode3to4( threeBytes, 0, numSigBytes, b4, 0 );
+ return b4;
+ } // end encode3to4
+
+
+ /**
+ * Encodes up to three bytes of the array <var>source</var>
+ * and writes the resulting four Base64 bytes to <var>destination</var>.
+ * The source and destination arrays can be manipulated
+ * anywhere along their length by specifying
+ * <var>srcOffset</var> and <var>destOffset</var>.
+ * This method does not check to make sure your arrays
+ * are large enough to accommodate <var>srcOffset</var> + 3 for
+ * the <var>source</var> array or <var>destOffset</var> + 4 for
+ * the <var>destination</var> array.
+ * The actual number of significant bytes in your array is
+ * given by <var>numSigBytes</var>.
+ *
+ * @param source the array to convert
+ * @param srcOffset the index where conversion begins
+ * @param numSigBytes the number of significant bytes in your array
+ * @param destination the array to hold the conversion
+ * @param destOffset the index where output will be put
+ * @return the <var>destination</var> array
+ * @since 1.3
+ */
+ private static byte[] encode3to4(
+ byte[] source, int srcOffset, int numSigBytes,
+ byte[] destination, int destOffset )
+ {
+ // 1 2 3
+ // 01234567890123456789012345678901 Bit position
+ // --------000000001111111122222222 Array position from threeBytes
+ // --------| || || || | Six bit groups to index ALPHABET
+ // >>18 >>12 >> 6 >> 0 Right shift necessary
+ // 0x3f 0x3f 0x3f Additional AND
+
+ // Create buffer with zero-padding if there are only one or two
+ // significant bytes passed in the array.
+ // We have to shift left 24 in order to flush out the 1's that appear
+ // when Java treats a value as negative that is cast from a byte to an int.
+ int inBuff = ( numSigBytes > 0 ? ((source[ srcOffset ] << 24) >>> 8) : 0 )
+ | ( numSigBytes > 1 ? ((source[ srcOffset + 1 ] << 24) >>> 16) : 0 )
+ | ( numSigBytes > 2 ? ((source[ srcOffset + 2 ] << 24) >>> 24) : 0 );
+
+ switch( numSigBytes )
+ {
+ case 3:
+ destination[ destOffset ] = ALPHABET[ (inBuff >>> 18) ];
+ destination[ destOffset + 1 ] = ALPHABET[ (inBuff >>> 12) & 0x3f ];
+ destination[ destOffset + 2 ] = ALPHABET[ (inBuff >>> 6) & 0x3f ];
+ destination[ destOffset + 3 ] = ALPHABET[ (inBuff ) & 0x3f ];
+ return destination;
+
+ case 2:
+ destination[ destOffset ] = ALPHABET[ (inBuff >>> 18) ];
+ destination[ destOffset + 1 ] = ALPHABET[ (inBuff >>> 12) & 0x3f ];
+ destination[ destOffset + 2 ] = ALPHABET[ (inBuff >>> 6) & 0x3f ];
+ destination[ destOffset + 3 ] = EQUALS_SIGN;
+ return destination;
+
+ case 1:
+ destination[ destOffset ] = ALPHABET[ (inBuff >>> 18) ];
+ destination[ destOffset + 1 ] = ALPHABET[ (inBuff >>> 12) & 0x3f ];
+ destination[ destOffset + 2 ] = EQUALS_SIGN;
+ destination[ destOffset + 3 ] = EQUALS_SIGN;
+ return destination;
+
+ default:
+ return destination;
+ } // end switch
+ } // end encode3to4
+
+
+
+ /**
+ * Serializes an object and returns the Base64-encoded
+ * version of that serialized object. If the object
+ * cannot be serialized or there is another error,
+ * the method will return <tt>null</tt>.
+ * The object is not GZip-compressed before being encoded.
+ *
+ * @param serializableObject The object to encode
+ * @return The Base64-encoded object
+ * @since 1.4
+ */
+ public static String encodeObject( java.io.Serializable serializableObject )
+ {
+ return encodeObject( serializableObject, NO_OPTIONS );
+ } // end encodeObject
+
+
+
+ /**
+ * Serializes an object and returns the Base64-encoded
+ * version of that serialized object. If the object
+ * cannot be serialized or there is another error,
+ * the method will return <tt>null</tt>.
+ * <p>
+ * Valid options:<pre>
+ * GZIP: gzip-compresses object before encoding it.
+ * DONT_BREAK_LINES: don't break lines at 76 characters
+ * <i>Note: Technically, this makes your encoding non-compliant.</i>
+ * </pre>
+ * <p>
+ * Example: <code>encodeObject( myObj, Base64.GZIP )</code> or
+ * <p>
+ * Example: <code>encodeObject( myObj, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+ *
+ * @param serializableObject The object to encode
+ * @param options Specified options
+ * @return The Base64-encoded object
+ * @see Base64#GZIP
+ * @see Base64#DONT_BREAK_LINES
+ * @since 2.0
+ */
+ public static String encodeObject( java.io.Serializable serializableObject, int options )
+ {
+ // Streams
+ java.io.ByteArrayOutputStream baos = null;
+ java.io.OutputStream b64os = null;
+ java.io.ObjectOutputStream oos = null;
+ java.util.zip.GZIPOutputStream gzos = null;
+
+ // Isolate options
+ int gzip = (options & GZIP);
+ int dontBreakLines = (options & DONT_BREAK_LINES);
+
+ try
+ {
+ // ObjectOutputStream -> (GZIP) -> Base64 -> ByteArrayOutputStream
+ baos = new java.io.ByteArrayOutputStream();
+ b64os = new Base64.OutputStream( baos, ENCODE | dontBreakLines );
+
+ // GZip?
+ if( gzip == GZIP )
+ {
+ gzos = new java.util.zip.GZIPOutputStream( b64os );
+ oos = new java.io.ObjectOutputStream( gzos );
+ } // end if: gzip
+ else
+ oos = new java.io.ObjectOutputStream( b64os );
+
+ oos.writeObject( serializableObject );
+ } // end try
+ catch( java.io.IOException e )
+ {
+ e.printStackTrace();
+ return null;
+ } // end catch
+ finally
+ {
+ try{ oos.close(); } catch( Exception e ){}
+ try{ gzos.close(); } catch( Exception e ){}
+ try{ b64os.close(); } catch( Exception e ){}
+ try{ baos.close(); } catch( Exception e ){}
+ } // end finally
+
+ // Return value according to relevant encoding.
+ try
+ {
+ return new String( baos.toByteArray(), PREFERRED_ENCODING );
+ } // end try
+ catch (java.io.UnsupportedEncodingException uue)
+ {
+ return new String( baos.toByteArray() );
+ } // end catch
+
+ } // end encode
+
+
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * Does not GZip-compress data.
+ *
+ * @param source The data to convert
+ * @since 1.4
+ */
+ public static String encodeBytes( byte[] source )
+ {
+ return encodeBytes( source, 0, source.length, NO_OPTIONS );
+ } // end encodeBytes
+
+
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * <p>
+ * Valid options:<pre>
+ * GZIP: gzip-compresses object before encoding it.
+ * DONT_BREAK_LINES: don't break lines at 76 characters
+ * <i>Note: Technically, this makes your encoding non-compliant.</i>
+ * </pre>
+ * <p>
+ * Example: <code>encodeBytes( myData, Base64.GZIP )</code> or
+ * <p>
+ * Example: <code>encodeBytes( myData, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+ *
+ *
+ * @param source The data to convert
+ * @param options Specified options
+ * @see Base64#GZIP
+ * @see Base64#DONT_BREAK_LINES
+ * @since 2.0
+ */
+ public static String encodeBytes( byte[] source, int options )
+ {
+ return encodeBytes( source, 0, source.length, options );
+ } // end encodeBytes
+
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * Does not GZip-compress data.
+ *
+ * @param source The data to convert
+ * @param off Offset in array where conversion should begin
+ * @param len Length of data to convert
+ * @since 1.4
+ */
+ public static String encodeBytes( byte[] source, int off, int len )
+ {
+ return encodeBytes( source, off, len, NO_OPTIONS );
+ } // end encodeBytes
+
+
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * <p>
+ * Valid options:<pre>
+ * GZIP: gzip-compresses object before encoding it.
+ * DONT_BREAK_LINES: don't break lines at 76 characters
+ * <i>Note: Technically, this makes your encoding non-compliant.</i>
+ * </pre>
+ * <p>
+ * Example: <code>encodeBytes( myData, Base64.GZIP )</code> or
+ * <p>
+ * Example: <code>encodeBytes( myData, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+ *
+ *
+ * @param source The data to convert
+ * @param off Offset in array where conversion should begin
+ * @param len Length of data to convert
+ * @param options Specified options
+ * @see Base64#GZIP
+ * @see Base64#DONT_BREAK_LINES
+ * @since 2.0
+ */
+ public static String encodeBytes( byte[] source, int off, int len, int options )
+ {
+ // Isolate options
+ int dontBreakLines = ( options & DONT_BREAK_LINES );
+ int gzip = ( options & GZIP );
+
+ // Compress?
+ if( gzip == GZIP )
+ {
+ java.io.ByteArrayOutputStream baos = null;
+ java.util.zip.GZIPOutputStream gzos = null;
+ Base64.OutputStream b64os = null;
+
+
+ try
+ {
+ // GZip -> Base64 -> ByteArray
+ baos = new java.io.ByteArrayOutputStream();
+ b64os = new Base64.OutputStream( baos, ENCODE | dontBreakLines );
+ gzos = new java.util.zip.GZIPOutputStream( b64os );
+
+ gzos.write( source, off, len );
+ gzos.close();
+ } // end try
+ catch( java.io.IOException e )
+ {
+ e.printStackTrace();
+ return null;
+ } // end catch
+ finally
+ {
+ try{ gzos.close(); } catch( Exception e ){}
+ try{ b64os.close(); } catch( Exception e ){}
+ try{ baos.close(); } catch( Exception e ){}
+ } // end finally
+
+ // Return value according to relevant encoding.
+ try
+ {
+ return new String( baos.toByteArray(), PREFERRED_ENCODING );
+ } // end try
+ catch (java.io.UnsupportedEncodingException uue)
+ {
+ return new String( baos.toByteArray() );
+ } // end catch
+ } // end if: compress
+
+ // Else, don't compress. Better not to use streams at all then.
+ else
+ {
+ // Convert option to boolean in way that code likes it.
+ boolean breakLines = dontBreakLines == 0;
+
+ int len43 = len * 4 / 3;
+ byte[] outBuff = new byte[ ( len43 ) // Main 4:3
+ + ( (len % 3) > 0 ? 4 : 0 ) // Account for padding
+ + (breakLines ? ( len43 / MAX_LINE_LENGTH ) : 0) ]; // New lines
+ int d = 0;
+ int e = 0;
+ int len2 = len - 2;
+ int lineLength = 0;
+ for( ; d < len2; d+=3, e+=4 )
+ {
+ encode3to4( source, d+off, 3, outBuff, e );
+
+ lineLength += 4;
+ if( breakLines && lineLength == MAX_LINE_LENGTH )
+ {
+ outBuff[e+4] = NEW_LINE;
+ e++;
+ lineLength = 0;
+ } // end if: end of line
+ } // end for: each piece of array
+
+ if( d < len )
+ {
+ encode3to4( source, d+off, len - d, outBuff, e );
+ e += 4;
+ } // end if: some padding needed
+
+
+ // Return value according to relevant encoding.
+ try
+ {
+ return new String( outBuff, 0, e, PREFERRED_ENCODING );
+ } // end try
+ catch (java.io.UnsupportedEncodingException uue)
+ {
+ return new String( outBuff, 0, e );
+ } // end catch
+
+ } // end else: don't compress
+
+ } // end encodeBytes
+
+
+
+
+
+/* ******** D E C O D I N G M E T H O D S ******** */
+
+
+ /**
+ * Decodes four bytes from array <var>source</var>
+ * and writes the resulting bytes (up to three of them)
+ * to <var>destination</var>.
+ * The source and destination arrays can be manipulated
+ * anywhere along their length by specifying
+ * <var>srcOffset</var> and <var>destOffset</var>.
+ * This method does not check to make sure your arrays
+ * are large enough to accommodate <var>srcOffset</var> + 4 for
+ * the <var>source</var> array or <var>destOffset</var> + 3 for
+ * the <var>destination</var> array.
+ * This method returns the actual number of bytes that
+ * were converted from the Base64 encoding.
+ *
+ *
+ * @param source the array to convert
+ * @param srcOffset the index where conversion begins
+ * @param destination the array to hold the conversion
+ * @param destOffset the index where output will be put
+ * @return the number of decoded bytes converted
+ * @since 1.3
+ */
+ private static int decode4to3( byte[] source, int srcOffset, byte[] destination, int destOffset )
+ {
+ // Example: Dk==
+ if( source[ srcOffset + 2] == EQUALS_SIGN )
+ {
+ // Two ways to do the same thing. Don't know which way I like best.
+ //int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1] ] << 24 ) >>> 12 );
+ int outBuff = ( ( DECODABET[ source[ srcOffset ] ] & 0xFF ) << 18 )
+ | ( ( DECODABET[ source[ srcOffset + 1] ] & 0xFF ) << 12 );
+
+ destination[ destOffset ] = (byte)( outBuff >>> 16 );
+ return 1;
+ }
+
+ // Example: DkL=
+ else if( source[ srcOffset + 3 ] == EQUALS_SIGN )
+ {
+ // Two ways to do the same thing. Don't know which way I like best.
+ //int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+ // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 );
+ int outBuff = ( ( DECODABET[ source[ srcOffset ] ] & 0xFF ) << 18 )
+ | ( ( DECODABET[ source[ srcOffset + 1 ] ] & 0xFF ) << 12 )
+ | ( ( DECODABET[ source[ srcOffset + 2 ] ] & 0xFF ) << 6 );
+
+ destination[ destOffset ] = (byte)( outBuff >>> 16 );
+ destination[ destOffset + 1 ] = (byte)( outBuff >>> 8 );
+ return 2;
+ }
+
+ // Example: DkLE
+ else
+ {
+ try{
+ // Two ways to do the same thing. Don't know which way I like best.
+ //int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+ // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 )
+ // | ( ( DECODABET[ source[ srcOffset + 3 ] ] << 24 ) >>> 24 );
+ int outBuff = ( ( DECODABET[ source[ srcOffset ] ] & 0xFF ) << 18 )
+ | ( ( DECODABET[ source[ srcOffset + 1 ] ] & 0xFF ) << 12 )
+ | ( ( DECODABET[ source[ srcOffset + 2 ] ] & 0xFF ) << 6)
+ | ( ( DECODABET[ source[ srcOffset + 3 ] ] & 0xFF ) );
+
+
+ destination[ destOffset ] = (byte)( outBuff >> 16 );
+ destination[ destOffset + 1 ] = (byte)( outBuff >> 8 );
+ destination[ destOffset + 2 ] = (byte)( outBuff );
+
+ return 3;
+ }catch( Exception e){
+ System.out.println(""+source[srcOffset]+ ": " + ( DECODABET[ source[ srcOffset ] ] ) );
+ System.out.println(""+source[srcOffset+1]+ ": " + ( DECODABET[ source[ srcOffset + 1 ] ] ) );
+ System.out.println(""+source[srcOffset+2]+ ": " + ( DECODABET[ source[ srcOffset + 2 ] ] ) );
+ System.out.println(""+source[srcOffset+3]+ ": " + ( DECODABET[ source[ srcOffset + 3 ] ] ) );
+ return -1;
+ } // end catch
+ }
+ } // end decodeToBytes
+
+
+
+
+ /**
+ * Very low-level access to decoding ASCII characters in
+ * the form of a byte array. Does not support automatically
+ * gunzipping or any other "fancy" features.
+ *
+ * @param source The Base64 encoded data
+ * @param off The offset of where to begin decoding
+ * @param len The length of characters to decode
+ * @return decoded data
+ * @since 1.3
+ */
+ public static byte[] decode( byte[] source, int off, int len )
+ {
+ int len34 = len * 3 / 4;
+ byte[] outBuff = new byte[ len34 ]; // Upper limit on size of output
+ int outBuffPosn = 0;
+
+ byte[] b4 = new byte[4];
+ int b4Posn = 0;
+ int i = 0;
+ byte sbiCrop = 0;
+ byte sbiDecode = 0;
+ for( i = off; i < off+len; i++ )
+ {
+ sbiCrop = (byte)(source[i] & 0x7f); // Only the low seven bits
+ sbiDecode = DECODABET[ sbiCrop ];
+
+ if( sbiDecode >= WHITE_SPACE_ENC ) // White space, Equals sign or better
+ {
+ if( sbiDecode >= EQUALS_SIGN_ENC )
+ {
+ b4[ b4Posn++ ] = sbiCrop;
+ if( b4Posn > 3 )
+ {
+ outBuffPosn += decode4to3( b4, 0, outBuff, outBuffPosn );
+ b4Posn = 0;
+
+ // If that was the equals sign, break out of 'for' loop
+ if( sbiCrop == EQUALS_SIGN )
+ break;
+ } // end if: quartet built
+
+ } // end if: equals sign or better
+
+ } // end if: white space, equals sign or better
+ else
+ {
+ System.err.println( "Bad Base64 input character at " + i + ": " + source[i] + "(decimal)" );
+ return null;
+ } // end else:
+ } // each input character
+
+ byte[] out = new byte[ outBuffPosn ];
+ System.arraycopy( outBuff, 0, out, 0, outBuffPosn );
+ return out;
+ } // end decode
+
+
+
+
+ /**
+ * Decodes data from Base64 notation, automatically
+ * detecting gzip-compressed data and decompressing it.
+ *
+ * @param s the string to decode
+ * @return the decoded data
+ * @since 1.4
+ */
+ public static byte[] decode( String s )
+ {
+ byte[] bytes;
+ try
+ {
+ bytes = s.getBytes( PREFERRED_ENCODING );
+ } // end try
+ catch( java.io.UnsupportedEncodingException uee )
+ {
+ bytes = s.getBytes();
+ } // end catch
+ //</change>
+
+ // Decode
+ bytes = decode( bytes, 0, bytes.length );
+
+
+ // Check to see if it's gzip-compressed
+ // GZIP Magic Two-Byte Number: 0x8b1f (35615)
+ if( bytes != null && bytes.length >= 4 )
+ {
+
+ int head = ((int)bytes[0] & 0xff) | ((bytes[1] << 8) & 0xff00);
+ if( java.util.zip.GZIPInputStream.GZIP_MAGIC == head )
+ {
+ java.io.ByteArrayInputStream bais = null;
+ java.util.zip.GZIPInputStream gzis = null;
+ java.io.ByteArrayOutputStream baos = null;
+ byte[] buffer = new byte[2048];
+ int length = 0;
+
+ try
+ {
+ baos = new java.io.ByteArrayOutputStream();
+ bais = new java.io.ByteArrayInputStream( bytes );
+ gzis = new java.util.zip.GZIPInputStream( bais );
+
+ while( ( length = gzis.read( buffer ) ) >= 0 )
+ {
+ baos.write(buffer,0,length);
+ } // end while: reading input
+
+ // No error? Get new bytes.
+ bytes = baos.toByteArray();
+
+ } // end try
+ catch( java.io.IOException e )
+ {
+ // Just return originally-decoded bytes
+ } // end catch
+ finally
+ {
+ try{ baos.close(); } catch( Exception e ){}
+ try{ gzis.close(); } catch( Exception e ){}
+ try{ bais.close(); } catch( Exception e ){}
+ } // end finally
+
+ } // end if: gzipped
+ } // end if: bytes.length >= 4
+
+ return bytes;
+ } // end decode
+
+
+
+
+ /**
+ * Attempts to decode Base64 data and deserialize a Java
+ * Object within. Returns <tt>null</tt> if there was an error.
+ *
+ * @param encodedObject The Base64 data to decode
+ * @return The decoded and deserialized object
+ * @since 1.5
+ */
+ public static Object decodeToObject( String encodedObject )
+ {
+ // Decode and gunzip if necessary
+ byte[] objBytes = decode( encodedObject );
+
+ java.io.ByteArrayInputStream bais = null;
+ java.io.ObjectInputStream ois = null;
+ Object obj = null;
+
+ try
+ {
+ bais = new java.io.ByteArrayInputStream( objBytes );
+ ois = new java.io.ObjectInputStream( bais );
+
+ obj = ois.readObject();
+ } // end try
+ catch( java.io.IOException e )
+ {
+ e.printStackTrace();
+ obj = null;
+ } // end catch
+ catch( java.lang.ClassNotFoundException e )
+ {
+ e.printStackTrace();
+ obj = null;
+ } // end catch
+ finally
+ {
+ try{ bais.close(); } catch( Exception e ){}
+ try{ ois.close(); } catch( Exception e ){}
+ } // end finally
+
+ return obj;
+ } // end decodeObject
+
+
+
+ /**
+ * Convenience method for encoding data to a file.
+ *
+ * @param dataToEncode byte array of data to encode in base64 form
+ * @param filename Filename for saving encoded data
+ * @return <tt>true</tt> if successful, <tt>false</tt> otherwise
+ *
+ * @since 2.1
+ */
+ public static boolean encodeToFile( byte[] dataToEncode, String filename )
+ {
+ boolean success = false;
+ Base64.OutputStream bos = null;
+ try
+ {
+ bos = new Base64.OutputStream(
+ new java.io.FileOutputStream( filename ), Base64.ENCODE );
+ bos.write( dataToEncode );
+ success = true;
+ } // end try
+ catch( java.io.IOException e )
+ {
+
+ success = false;
+ } // end catch: IOException
+ finally
+ {
+ try{ bos.close(); } catch( Exception e ){}
+ } // end finally
+
+ return success;
+ } // end encodeToFile
+
+
+ /**
+ * Convenience method for decoding data to a file.
+ *
+ * @param dataToDecode Base64-encoded data as a string
+ * @param filename Filename for saving decoded data
+ * @return <tt>true</tt> if successful, <tt>false</tt> otherwise
+ *
+ * @since 2.1
+ */
+ public static boolean decodeToFile( String dataToDecode, String filename )
+ {
+ boolean success = false;
+ Base64.OutputStream bos = null;
+ try
+ {
+ bos = new Base64.OutputStream(
+ new java.io.FileOutputStream( filename ), Base64.DECODE );
+ bos.write( dataToDecode.getBytes( PREFERRED_ENCODING ) );
+ success = true;
+ } // end try
+ catch( java.io.IOException e )
+ {
+ success = false;
+ } // end catch: IOException
+ finally
+ {
+ try{ bos.close(); } catch( Exception e ){}
+ } // end finally
+
+ return success;
+ } // end decodeToFile
+
+
+
+
+ /**
+ * Convenience method for reading a base64-encoded
+ * file and decoding it.
+ *
+ * @param filename Filename for reading encoded data
+ * @return decoded byte array or null if unsuccessful
+ *
+ * @since 2.1
+ */
+ public static byte[] decodeFromFile( String filename )
+ {
+ byte[] decodedData = null;
+ Base64.InputStream bis = null;
+ try
+ {
+ // Set up some useful variables
+ java.io.File file = new java.io.File( filename );
+ byte[] buffer = null;
+ int length = 0;
+ int numBytes = 0;
+
+ // Check for size of file
+ if( file.length() > Integer.MAX_VALUE )
+ {
+ System.err.println( "File is too big for this convenience method (" + file.length() + " bytes)." );
+ return null;
+ } // end if: file too big for int index
+ buffer = new byte[ (int)file.length() ];
+
+ // Open a stream
+ bis = new Base64.InputStream(
+ new java.io.BufferedInputStream(
+ new java.io.FileInputStream( file ) ), Base64.DECODE );
+
+ // Read until done
+ while( ( numBytes = bis.read( buffer, length, 4096 ) ) >= 0 )
+ length += numBytes;
+
+ // Save in a variable to return
+ decodedData = new byte[ length ];
+ System.arraycopy( buffer, 0, decodedData, 0, length );
+
+ } // end try
+ catch( java.io.IOException e )
+ {
+ System.err.println( "Error decoding from file " + filename );
+ } // end catch: IOException
+ finally
+ {
+ try{ bis.close(); } catch( Exception e) {}
+ } // end finally
+
+ return decodedData;
+ } // end decodeFromFile
+
+
+
+ /**
+ * Convenience method for reading a binary file
+ * and base64-encoding it.
+ *
+ * @param filename Filename for reading binary data
+ * @return base64-encoded string or null if unsuccessful
+ *
+ * @since 2.1
+ */
+ public static String encodeFromFile( String filename )
+ {
+ String encodedData = null;
+ Base64.InputStream bis = null;
+ try
+ {
+ // Set up some useful variables
+ java.io.File file = new java.io.File( filename );
+ byte[] buffer = new byte[ (int)(file.length() * 1.4) ];
+ int length = 0;
+ int numBytes = 0;
+
+ // Open a stream
+ bis = new Base64.InputStream(
+ new java.io.BufferedInputStream(
+ new java.io.FileInputStream( file ) ), Base64.ENCODE );
+
+ // Read until done
+ while( ( numBytes = bis.read( buffer, length, 4096 ) ) >= 0 )
+ length += numBytes;
+
+ // Save in a variable to return
+ encodedData = new String( buffer, 0, length, Base64.PREFERRED_ENCODING );
+
+ } // end try
+ catch( java.io.IOException e )
+ {
+ System.err.println( "Error encoding from file " + filename );
+ } // end catch: IOException
+ finally
+ {
+ try{ bis.close(); } catch( Exception e) {}
+ } // end finally
+
+ return encodedData;
+ } // end encodeFromFile
+
+
+
+
+ /* ******** I N N E R C L A S S I N P U T S T R E A M ******** */
+
+
+
+ /**
+ * A {@link Base64.InputStream} will read data from another
+ * <tt>java.io.InputStream</tt>, given in the constructor,
+ * and encode/decode to/from Base64 notation on the fly.
+ *
+ * @see Base64
+ * @since 1.3
+ */
+ public static class InputStream extends java.io.FilterInputStream
+ {
+ private boolean encode; // Encoding or decoding
+ private int position; // Current position in the buffer
+ private byte[] buffer; // Small buffer holding converted data
+ private int bufferLength; // Length of buffer (3 or 4)
+ private int numSigBytes; // Number of meaningful bytes in the buffer
+ private int lineLength;
+ private boolean breakLines; // Break lines at less than 80 characters
+
+
+ /**
+ * Constructs a {@link Base64.InputStream} in DECODE mode.
+ *
+ * @param in the <tt>java.io.InputStream</tt> from which to read data.
+ * @since 1.3
+ */
+ public InputStream( java.io.InputStream in )
+ {
+ this( in, DECODE );
+ } // end constructor
+
+
+ /**
+ * Constructs a {@link Base64.InputStream} in
+ * either ENCODE or DECODE mode.
+ * <p>
+ * Valid options:<pre>
+ * ENCODE or DECODE: Encode or Decode as data is read.
+ * DONT_BREAK_LINES: don't break lines at 76 characters
+ * (only meaningful when encoding)
+ * <i>Note: Technically, this makes your encoding non-compliant.</i>
+ * </pre>
+ * <p>
+ * Example: <code>new Base64.InputStream( in, Base64.DECODE )</code>
+ *
+ *
+ * @param in the <tt>java.io.InputStream</tt> from which to read data.
+ * @param options Specified options
+ * @see Base64#ENCODE
+ * @see Base64#DECODE
+ * @see Base64#DONT_BREAK_LINES
+ * @since 2.0
+ */
+ public InputStream( java.io.InputStream in, int options )
+ {
+ super( in );
+ this.breakLines = (options & DONT_BREAK_LINES) != DONT_BREAK_LINES;
+ this.encode = (options & ENCODE) == ENCODE;
+ this.bufferLength = encode ? 4 : 3;
+ this.buffer = new byte[ bufferLength ];
+ this.position = -1;
+ this.lineLength = 0;
+ } // end constructor
+
+ /**
+ * Reads enough of the input stream to convert
+ * to/from Base64 and returns the next byte.
+ *
+ * @return next byte
+ * @since 1.3
+ */
+ public int read() throws java.io.IOException
+ {
+ // Do we need to get data?
+ if( position < 0 )
+ {
+ if( encode )
+ {
+ byte[] b3 = new byte[3];
+ int numBinaryBytes = 0;
+ for( int i = 0; i < 3; i++ )
+ {
+ try
+ {
+ int b = in.read();
+
+ // If end of stream, b is -1.
+ if( b >= 0 )
+ {
+ b3[i] = (byte)b;
+ numBinaryBytes++;
+ } // end if: not end of stream
+
+ } // end try: read
+ catch( java.io.IOException e )
+ {
+ // Only a problem if we got no data at all.
+ if( i == 0 )
+ throw e;
+
+ } // end catch
+ } // end for: each needed input byte
+
+ if( numBinaryBytes > 0 )
+ {
+ encode3to4( b3, 0, numBinaryBytes, buffer, 0 );
+ position = 0;
+ numSigBytes = 4;
+ } // end if: got data
+ else
+ {
+ return -1;
+ } // end else
+ } // end if: encoding
+
+ // Else decoding
+ else
+ {
+ byte[] b4 = new byte[4];
+ int i = 0;
+ for( i = 0; i < 4; i++ )
+ {
+ // Read four "meaningful" bytes:
+ int b = 0;
+ do{ b = in.read(); }
+ while( b >= 0 && DECODABET[ b & 0x7f ] <= WHITE_SPACE_ENC );
+
+ if( b < 0 )
+ break; // Reads a -1 if end of stream
+
+ b4[i] = (byte)b;
+ } // end for: each needed input byte
+
+ if( i == 4 )
+ {
+ numSigBytes = decode4to3( b4, 0, buffer, 0 );
+ position = 0;
+ } // end if: got four characters
+ else if( i == 0 ){
+ return -1;
+ } // end else if: also padded correctly
+ else
+ {
+ // Must have broken out from above.
+ throw new java.io.IOException( "Improperly padded Base64 input." );
+ } // end
+
+ } // end else: decode
+ } // end else: get data
+
+ // Got data?
+ if( position >= 0 )
+ {
+ // End of relevant data?
+ if( /*!encode &&*/ position >= numSigBytes )
+ return -1;
+
+ if( encode && breakLines && lineLength >= MAX_LINE_LENGTH )
+ {
+ lineLength = 0;
+ return '\n';
+ } // end if
+ else
+ {
+ lineLength++; // This isn't important when decoding
+ // but throwing an extra "if" seems
+ // just as wasteful.
+
+ int b = buffer[ position++ ];
+
+ if( position >= bufferLength )
+ position = -1;
+
+ return b & 0xFF; // This is how you "cast" a byte that's
+ // intended to be unsigned.
+ } // end else
+ } // end if: position >= 0
+
+ // Else error
+ else
+ {
+ // When JDK1.4 is more accepted, use an assertion here.
+ throw new java.io.IOException( "Error in Base64 code reading stream." );
+ } // end else
+ } // end read
+
+
+ /**
+ * Calls {@link #read()} repeatedly until the end of stream
+ * is reached or <var>len</var> bytes are read.
+ * Returns number of bytes read into array or -1 if
+ * end of stream is encountered.
+ *
+ * @param dest array to hold values
+ * @param off offset for array
+ * @param len max number of bytes to read into array
+ * @return bytes read into array or -1 if end of stream is encountered.
+ * @since 1.3
+ */
+ public int read( byte[] dest, int off, int len ) throws java.io.IOException
+ {
+ int i;
+ int b;
+ for( i = 0; i < len; i++ )
+ {
+ b = read();
+
+ //if( b < 0 && i == 0 )
+ // return -1;
+
+ if( b >= 0 )
+ dest[off + i] = (byte)b;
+ else if( i == 0 )
+ return -1;
+ else
+ break; // Out of 'for' loop
+ } // end for: each byte read
+ return i;
+ } // end read
+
+ } // end inner class InputStream
+
+
+
+
+
+
+ /* ******** I N N E R C L A S S O U T P U T S T R E A M ******** */
+
+
+
+ /**
+ * A {@link Base64.OutputStream} will write data to another
+ * <tt>java.io.OutputStream</tt>, given in the constructor,
+ * and encode/decode to/from Base64 notation on the fly.
+ *
+ * @see Base64
+ * @since 1.3
+ */
+ public static class OutputStream extends java.io.FilterOutputStream
+ {
+ private boolean encode;
+ private int position;
+ private byte[] buffer;
+ private int bufferLength;
+ private int lineLength;
+ private boolean breakLines;
+ private byte[] b4; // Scratch used in a few places
+ private boolean suspendEncoding;
+
+ /**
+ * Constructs a {@link Base64.OutputStream} in ENCODE mode.
+ *
+ * @param out the <tt>java.io.OutputStream</tt> to which data will be written.
+ * @since 1.3
+ */
+ public OutputStream( java.io.OutputStream out )
+ {
+ this( out, ENCODE );
+ } // end constructor
+
+
+ /**
+ * Constructs a {@link Base64.OutputStream} in
+ * either ENCODE or DECODE mode.
+ * <p>
+ * Valid options:<pre>
+ * ENCODE or DECODE: Encode or Decode as data is read.
+ * DONT_BREAK_LINES: don't break lines at 76 characters
+ * (only meaningful when encoding)
+ * <i>Note: Technically, this makes your encoding non-compliant.</i>
+ * </pre>
+ * <p>
+ * Example: <code>new Base64.OutputStream( out, Base64.ENCODE )</code>
+ *
+ * @param out the <tt>java.io.OutputStream</tt> to which data will be written.
+ * @param options Specified options.
+ * @see Base64#ENCODE
+ * @see Base64#DECODE
+ * @see Base64#DONT_BREAK_LINES
+ * @since 1.3
+ */
+ public OutputStream( java.io.OutputStream out, int options )
+ {
+ super( out );
+ this.breakLines = (options & DONT_BREAK_LINES) != DONT_BREAK_LINES;
+ this.encode = (options & ENCODE) == ENCODE;
+ this.bufferLength = encode ? 3 : 4;
+ this.buffer = new byte[ bufferLength ];
+ this.position = 0;
+ this.lineLength = 0;
+ this.suspendEncoding = false;
+ this.b4 = new byte[4];
+ } // end constructor
+
+
+ /**
+ * Writes the byte to the output stream after
+ * converting to/from Base64 notation.
+ * When encoding, bytes are buffered three
+ * at a time before the output stream actually
+ * gets a write() call.
+ * When decoding, bytes are buffered four
+ * at a time.
+ *
+ * @param theByte the byte to write
+ * @since 1.3
+ */
+ public void write(int theByte) throws java.io.IOException
+ {
+ // Encoding suspended?
+ if( suspendEncoding )
+ {
+ super.out.write( theByte );
+ return;
+ } // end if: suspended
+
+ // Encode?
+ if( encode )
+ {
+ buffer[ position++ ] = (byte)theByte;
+ if( position >= bufferLength ) // Enough to encode.
+ {
+ out.write( encode3to4( b4, buffer, bufferLength ) );
+
+ lineLength += 4;
+ if( breakLines && lineLength >= MAX_LINE_LENGTH )
+ {
+ out.write( NEW_LINE );
+ lineLength = 0;
+ } // end if: end of line
+
+ position = 0;
+ } // end if: enough to output
+ } // end if: encoding
+
+ // Else, Decoding
+ else
+ {
+ // Meaningful Base64 character?
+ if( DECODABET[ theByte & 0x7f ] > WHITE_SPACE_ENC )
+ {
+ buffer[ position++ ] = (byte)theByte;
+ if( position >= bufferLength ) // Enough to output.
+ {
+ int len = Base64.decode4to3( buffer, 0, b4, 0 );
+ out.write( b4, 0, len );
+ //out.write( Base64.decode4to3( buffer ) );
+ position = 0;
+ } // end if: enough to output
+ } // end if: meaningful base64 character
+ else if( DECODABET[ theByte & 0x7f ] != WHITE_SPACE_ENC )
+ {
+ throw new java.io.IOException( "Invalid character in Base64 data." );
+ } // end else: not white space either
+ } // end else: decoding
+ } // end write
+
+
+
+ /**
+ * Calls {@link #write(int)} repeatedly until <var>len</var>
+ * bytes are written.
+ *
+ * @param theBytes array from which to read bytes
+ * @param off offset for array
+ * @param len max number of bytes to read into array
+ * @since 1.3
+ */
+ public void write( byte[] theBytes, int off, int len ) throws java.io.IOException
+ {
+ // Encoding suspended?
+ if( suspendEncoding )
+ {
+ super.out.write( theBytes, off, len );
+ return;
+ } // end if: suspended
+
+ for( int i = 0; i < len; i++ )
+ {
+ write( theBytes[ off + i ] );
+ } // end for: each byte written
+
+ } // end write
+
+
+
+ /**
+ * Method added by PHIL. [Thanks, PHIL. -Rob]
+ * This pads the buffer without closing the stream.
+ */
+ public void flushBase64() throws java.io.IOException
+ {
+ if( position > 0 )
+ {
+ if( encode )
+ {
+ out.write( encode3to4( b4, buffer, position ) );
+ position = 0;
+ } // end if: encoding
+ else
+ {
+ throw new java.io.IOException( "Base64 input not properly padded." );
+ } // end else: decoding
+ } // end if: buffer partially full
+
+ } // end flush
+
+
+ /**
+ * Flushes and closes (I think, in the superclass) the stream.
+ *
+ * @since 1.3
+ */
+ public void close() throws java.io.IOException
+ {
+ // 1. Ensure that pending characters are written
+ flushBase64();
+
+ // 2. Actually close the stream
+ // Base class both flushes and closes.
+ super.close();
+
+ buffer = null;
+ out = null;
+ } // end close
+
+
+
+ /**
+ * Suspends encoding of the stream.
+ * May be helpful if you need to embed a piece of
+ * base64-encoded data in a stream.
+ *
+ * @since 1.5.1
+ */
+ public void suspendEncoding() throws java.io.IOException
+ {
+ flushBase64();
+ this.suspendEncoding = true;
+ } // end suspendEncoding
+
+
+ /**
+ * Resumes encoding of the stream.
+ * May be helpful if you need to embed a piece of
+ * base64-encoded data in a stream.
+ *
+ * @since 1.5.1
+ */
+ public void resumeEncoding()
+ {
+ this.suspendEncoding = false;
+ } // end resumeEncoding
+
+
+
+ } // end inner class OutputStream
+
+
+} // end class Base64
--
1.5.6.74.g8a5e
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [JGIT PATCH 17/21] Misc. documentation fixes to Base64 utility
2008-06-29 7:59 ` [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 18/21] Extract the basic HTTP proxy support to its own class Shawn O. Pearce
2008-06-29 13:51 ` [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility Robin Rosenberg
1 sibling, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
These fixes silence warnings in the utility that are triggered
by our (rather pedantic) Eclipse compiler settings.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/util/Base64.java | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java b/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
index b0c19b6..9254bd0 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/util/Base64.java
@@ -378,6 +378,7 @@ public class Base64
* Does not GZip-compress data.
*
* @param source The data to convert
+ * @return encoded base64 representation of source.
* @since 1.4
*/
public static String encodeBytes( byte[] source )
@@ -403,6 +404,7 @@ public class Base64
*
* @param source The data to convert
* @param options Specified options
+ * @return encoded base64 representation of source.
* @see Base64#GZIP
* @see Base64#DONT_BREAK_LINES
* @since 2.0
@@ -420,6 +422,7 @@ public class Base64
* @param source The data to convert
* @param off Offset in array where conversion should begin
* @param len Length of data to convert
+ * @return encoded base64 representation of source.
* @since 1.4
*/
public static String encodeBytes( byte[] source, int off, int len )
@@ -447,6 +450,7 @@ public class Base64
* @param off Offset in array where conversion should begin
* @param len Length of data to convert
* @param options Specified options
+ * @return encoded base64 representation of source.
* @see Base64#GZIP
* @see Base64#DONT_BREAK_LINES
* @since 2.0
@@ -729,7 +733,7 @@ public class Base64
if( bytes != null && bytes.length >= 4 )
{
- int head = ((int)bytes[0] & 0xff) | ((bytes[1] << 8) & 0xff00);
+ int head = (bytes[0] & 0xff) | ((bytes[1] << 8) & 0xff00);
if( java.util.zip.GZIPInputStream.GZIP_MAGIC == head )
{
java.io.ByteArrayInputStream bais = null;
@@ -1386,6 +1390,7 @@ public class Base64
/**
* Method added by PHIL. [Thanks, PHIL. -Rob]
* This pads the buffer without closing the stream.
+ * @throws java.io.IOException input was not properly padded.
*/
public void flushBase64() throws java.io.IOException
{
@@ -1430,6 +1435,7 @@ public class Base64
* May be helpful if you need to embed a piece of
* base640-encoded data in a stream.
*
+ * @throws java.io.IOException input was not properly padded.
* @since 1.5.1
*/
public void suspendEncoding() throws java.io.IOException
--
1.5.6.74.g8a5e
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [JGIT PATCH 18/21] Extract the basic HTTP proxy support to its own class
2008-06-29 7:59 ` [JGIT PATCH 17/21] Misc. documentation fixes to Base64 utility Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 19/21] Create a really simple Amazon S3 REST client Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
This way the proxy can be initialized from locations other than
jgit's Main, such as from Eclipse plugin initialization or other
command line tools that wrap jgit.
We also moved the proxy lookup code to the utility class as the
error handling is several lines of code and may be shared.
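As a sketch of the intent, a wrapper's entry point could now do something
like the following; the WrapperMain class and the URL are hypothetical, only
the HttpSupport calls are from this patch:

  import java.net.HttpURLConnection;
  import java.net.Proxy;
  import java.net.ProxySelector;
  import java.net.URL;

  import org.spearce.jgit.util.HttpSupport;

  class WrapperMain {
    public static void main(final String[] argv) throws Exception {
      // Honor $http_proxy before the first HTTP request is made.
      HttpSupport.configureHttpProxy();

      final URL u = new URL("http://example.com/repo.git/info/refs");
      final Proxy p = HttpSupport.proxyFor(ProxySelector.getDefault(), u);
      final HttpURLConnection c = (HttpURLConnection) u.openConnection(p);
      if (HttpSupport.response(c) != HttpURLConnection.HTTP_OK)
        System.err.println(u + ": " + c.getResponseMessage());
    }
  }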
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/pgm/Main.java | 36 +----
.../org/spearce/jgit/transport/TransportHttp.java | 36 ++---
.../src/org/spearce/jgit/util/HttpSupport.java | 165 ++++++++++++++++++++
3 files changed, 182 insertions(+), 55 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/util/HttpSupport.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/pgm/Main.java b/org.spearce.jgit/src/org/spearce/jgit/pgm/Main.java
index 8afd0d7..3d507c6 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/pgm/Main.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/pgm/Main.java
@@ -39,13 +39,12 @@
package org.spearce.jgit.pgm;
import java.io.File;
-import java.net.MalformedURLException;
-import java.net.URL;
import java.util.Arrays;
import org.spearce.jgit.awtui.AwtAuthenticator;
import org.spearce.jgit.errors.TransportException;
import org.spearce.jgit.lib.Repository;
+import org.spearce.jgit.util.HttpSupport;
/** Command line entry point. */
public class Main {
@@ -60,7 +59,7 @@ public class Main {
public static void main(final String[] argv) {
try {
AwtAuthenticator.install();
- configureHttpProxy();
+ HttpSupport.configureHttpProxy();
execute(argv);
} catch (Die err) {
System.err.println("fatal: " + err.getMessage());
@@ -170,35 +169,4 @@ public class Main {
System.err.println("jgit [--git-dir=path] cmd ...");
System.exit(1);
}
-
- private static void configureHttpProxy() {
- final String s = System.getenv("http_proxy");
- if (s == null || s.equals(""))
- return;
-
- final URL u;
- try {
- u = new URL(s);
- } catch (MalformedURLException e) {
- throw new Die("Invalid http_proxy: " + s + ": " + e.getMessage());
- }
- if (!"http".equals(u.getProtocol()))
- throw new Die("Invalid http_proxy: " + s + ": Only http supported.");
-
- final String proxyHost = u.getHost();
- final int proxyPort = u.getPort();
-
- System.setProperty("http.proxyHost", proxyHost);
- if (proxyPort > 0)
- System.setProperty("http.proxyPort", String.valueOf(proxyPort));
-
- final String userpass = u.getUserInfo();
- if (userpass != null && userpass.contains(":")) {
- final int c = userpass.indexOf(':');
- final String user = userpass.substring(0, c);
- final String pass = userpass.substring(c + 1);
- AwtAuthenticator.add(new AwtAuthenticator.CachedAuthentication(
- proxyHost, proxyPort, user, pass));
- }
- }
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
index 2f28f2c..9351a12 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportHttp.java
@@ -41,13 +41,11 @@ import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
-import java.net.ConnectException;
+import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.Proxy;
import java.net.ProxySelector;
-import java.net.URISyntaxException;
import java.net.URL;
-import java.net.URLConnection;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;
@@ -59,6 +57,7 @@ import org.spearce.jgit.errors.TransportException;
import org.spearce.jgit.lib.ObjectId;
import org.spearce.jgit.lib.Ref;
import org.spearce.jgit.lib.Repository;
+import org.spearce.jgit.util.HttpSupport;
/**
* Transport over the non-Git aware HTTP and FTP protocol.
@@ -107,10 +106,6 @@ class TransportHttp extends WalkTransport {
return r;
}
- Proxy proxyFor(final URL u) throws URISyntaxException {
- return proxySelector.select(u.toURI()).get(0);
- }
-
class HttpObjectDB extends WalkRemoteObjectDatabase {
private final URL objectsUrl;
@@ -172,23 +167,22 @@ class TransportHttp extends WalkTransport {
@Override
FileStream open(final String path) throws IOException {
final URL base = objectsUrl;
- try {
- final URL u = new URL(base, path);
- final URLConnection c = u.openConnection(proxyFor(u));
+ final URL u = new URL(base, path);
+ final Proxy proxy = HttpSupport.proxyFor(proxySelector, u);
+ final HttpURLConnection c;
+
+ c = (HttpURLConnection) u.openConnection(proxy);
+ switch (HttpSupport.response(c)) {
+ case HttpURLConnection.HTTP_OK:
final InputStream in = c.getInputStream();
final int len = c.getContentLength();
return new FileStream(in, len);
- } catch (ConnectException ce) {
- // The standard J2SE error message is not very useful.
- //
- if ("Connection timed out: connect".equals(ce.getMessage()))
- throw new ConnectException("Connection timed out: " + base);
- throw new ConnectException(ce.getMessage() + " " + base);
- } catch (URISyntaxException e) {
- final ConnectException err;
- err = new ConnectException("Cannot determine proxy for " + base);
- err.initCause(e);
- throw err;
+ case HttpURLConnection.HTTP_NOT_FOUND:
+ throw new FileNotFoundException(u.toString());
+ default:
+ throw new IOException(u.toString() + ": "
+ + HttpSupport.response(c) + " "
+ + c.getResponseMessage());
}
}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/util/HttpSupport.java b/org.spearce.jgit/src/org/spearce/jgit/util/HttpSupport.java
new file mode 100644
index 0000000..29b4d8e
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/util/HttpSupport.java
@@ -0,0 +1,165 @@
+/*
+ * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ *
+ * - Neither the name of the Git Development Community nor the
+ * names of its contributors may be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+ * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.spearce.jgit.util;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.net.ConnectException;
+import java.net.HttpURLConnection;
+import java.net.MalformedURLException;
+import java.net.Proxy;
+import java.net.ProxySelector;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.net.URLEncoder;
+
+import org.spearce.jgit.awtui.AwtAuthenticator;
+
+/** Extra utilities to support usage of HTTP. */
+public class HttpSupport {
+ /**
+ * Configure the JRE's standard HTTP based on <code>http_proxy</code>.
+ * <p>
+ * The popular libcurl library honors the <code>http_proxy</code>
+ * environment variable as a means of specifying an HTTP proxy for requests
+ * made behind a firewall. This is not natively recognized by the JRE, so
+ * this method can be used by command line utilities to configure the JRE
+ * before the first request is sent.
+ *
+ * @throws MalformedURLException
+ * the value in <code>http_proxy</code> is unsupportable.
+ */
+ public static void configureHttpProxy() throws MalformedURLException {
+ final String s = System.getenv("http_proxy");
+ if (s == null || s.equals(""))
+ return;
+
+ final URL u = new URL(s);
+ if (!"http".equals(u.getProtocol()))
+ throw new MalformedURLException("Invalid http_proxy: " + s
+ + ": Only http supported.");
+
+ final String proxyHost = u.getHost();
+ final int proxyPort = u.getPort();
+
+ System.setProperty("http.proxyHost", proxyHost);
+ if (proxyPort > 0)
+ System.setProperty("http.proxyPort", String.valueOf(proxyPort));
+
+ final String userpass = u.getUserInfo();
+ if (userpass != null && userpass.contains(":")) {
+ final int c = userpass.indexOf(':');
+ final String user = userpass.substring(0, c);
+ final String pass = userpass.substring(c + 1);
+ AwtAuthenticator.add(new AwtAuthenticator.CachedAuthentication(
+ proxyHost, proxyPort, user, pass));
+ }
+ }
+
+ /**
+ * URL encode a value string into an output buffer.
+ *
+ * @param urlstr
+ * the output buffer.
+ * @param key
+ * value which must be encoded to protect special characters.
+ */
+ public static void encode(final StringBuilder urlstr, final String key) {
+ if (key == null || key.length() == 0)
+ return;
+ try {
+ urlstr.append(URLEncoder.encode(key, "UTF-8"));
+ } catch (UnsupportedEncodingException e) {
+ throw new RuntimeException("Could not URL encode to UTF-8", e);
+ }
+ }
+
+ /**
+ * Get the HTTP response code from the request.
+ * <p>
+ * Roughly the same as <code>c.getResponseCode()</code> but the
+ * ConnectException is translated to be more understandable.
+ *
+ * @param c
+ * connection the code should be obtained from.
+ * @return HTTP status code, usually 200 to indicate success. See
+ * {@link HttpURLConnection} for other defined constants.
+ * @throws IOException
+ * communications error prevented obtaining the response code.
+ */
+ public static int response(final HttpURLConnection c) throws IOException {
+ try {
+ return c.getResponseCode();
+ } catch (ConnectException ce) {
+ final String host = c.getURL().getHost();
+ // The standard J2SE error message is not very useful.
+ //
+ if ("Connection timed out: connect".equals(ce.getMessage()))
+ throw new ConnectException("Connection time out: " + host);
+ throw new ConnectException(ce.getMessage() + " " + host);
+ }
+ }
+
+ /**
+ * Determine the proxy server (if any) needed to obtain a URL.
+ *
+ * @param proxySelector
+ * proxy support for the caller.
+ * @param u
+ * location of the server caller wants to talk to.
+ * @return proxy to communicate with the supplied URL.
+ * @throws ConnectException
+ * the proxy could not be computed as the supplied URL could not
+ * be read. This failure should never occur.
+ */
+ public static Proxy proxyFor(final ProxySelector proxySelector, final URL u)
+ throws ConnectException {
+ try {
+ return proxySelector.select(u.toURI()).get(0);
+ } catch (URISyntaxException e) {
+ final ConnectException err;
+ err = new ConnectException("Cannot determine proxy for " + u);
+ err.initCause(e);
+ throw err;
+ }
+ }
+
+ private HttpSupport() {
+ // Utility class only.
+ }
+}
--
1.5.6.74.g8a5e
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [JGIT PATCH 19/21] Create a really simple Amazon S3 REST client
2008-06-29 7:59 ` [JGIT PATCH 18/21] Extract the basic HTTP proxy support to its own class Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 20/21] Add client side encryption to Amazon S3 client library Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
This is a very basic client for Amazon's Simple Storage Service (S3).
The client is able to perform the basic FTP-like operations necessary
to support storing a Git repository on the S3 servers, assuming the
transport is implemented as a dumb protocol style transport similar
to the existing sftp:// transport.
A tiny command line client is included to facilitate manual testing
against the S3 servers, as well as emergency operations such as
getting content or deleting content.
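For illustration, a minimal sketch of driving the client directly from
Java (the properties file path, bucket and key names below are examples
only; the sketch assumes a jets3t-style properties file with
accesskey/secretkey entries):

    // needs: java.io.*, java.util.Properties, java.net.URLConnection,
    //        org.spearce.jgit.transport.AmazonS3
    final Properties props = new Properties();
    props.load(new FileInputStream("/home/me/.s3_conn"));
    final AmazonS3 s3 = new AmazonS3(props);

    // recursive listing under a prefix, then stream one object back
    for (final String key : s3.list("mybucket", "projects/egit.git"))
        System.out.println(key);
    final URLConnection c = s3.get("mybucket", "projects/egit.git/HEAD");
    final InputStream in = c.getInputStream();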
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/transport/AmazonS3.java | 696 ++++++++++++++++++++
1 files changed, 696 insertions(+), 0 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java b/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
new file mode 100644
index 0000000..466d9e9
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
@@ -0,0 +1,696 @@
+/*
+ * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ *
+ * - Neither the name of the Git Development Community nor the
+ * names of its contributors may be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+ * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.spearce.jgit.transport;
+
+import java.io.EOFException;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.Proxy;
+import java.net.ProxySelector;
+import java.net.URL;
+import java.net.URLConnection;
+import java.security.DigestOutputStream;
+import java.security.InvalidKeyException;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TimeZone;
+import java.util.TreeMap;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+
+import org.spearce.jgit.awtui.AwtAuthenticator;
+import org.spearce.jgit.lib.Constants;
+import org.spearce.jgit.util.Base64;
+import org.spearce.jgit.util.HttpSupport;
+import org.spearce.jgit.util.TemporaryBuffer;
+import org.xml.sax.Attributes;
+import org.xml.sax.InputSource;
+import org.xml.sax.SAXException;
+import org.xml.sax.XMLReader;
+import org.xml.sax.helpers.DefaultHandler;
+import org.xml.sax.helpers.XMLReaderFactory;
+
+/**
+ * A simple HTTP REST client for the Amazon S3 service.
+ * <p>
+ * This client uses the REST API to communicate with the Amazon S3 servers and
+ * read or write content through a bucket that the user has access to. It is a
+ * very lightweight implementation of the S3 API and therefore does not have all
+ * of the bells and whistles of popular client implementations.
+ * <p>
+ * Authentication is always performed using the user's AWSAccessKeyId and their
+ * private AWSSecretAccessKey.
+ */
+public class AmazonS3 {
+ private static final Set<String> SIGNED_HEADERS;
+
+ private static final String HMAC = "HmacSHA1";
+
+ private static final String DOMAIN = "s3.amazonaws.com";
+
+ private static final String X_AMZ_ACL = "x-amz-acl";
+
+ static {
+ SIGNED_HEADERS = new HashSet<String>();
+ SIGNED_HEADERS.add("content-type");
+ SIGNED_HEADERS.add("content-md5");
+ SIGNED_HEADERS.add("date");
+ }
+
+ private static boolean isSignedHeader(final String name) {
+ final String nameLC = name.toLowerCase();
+ return SIGNED_HEADERS.contains(nameLC) || nameLC.startsWith("x-amz-");
+ }
+
+ private static String toCleanString(final List<String> list) {
+ final StringBuilder s = new StringBuilder();
+ for (final String v : list) {
+ if (s.length() > 0)
+ s.append(',');
+ s.append(v.replaceAll("\n", "").trim());
+ }
+ return s.toString();
+ }
+
+ private static String remove(final Map<String, String> m, final String k) {
+ final String r = m.remove(k);
+ return r != null ? r : "";
+ }
+
+ private static String httpNow() {
+ final String tz = "GMT";
+ final SimpleDateFormat fmt;
+ fmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss", Locale.US);
+ fmt.setTimeZone(TimeZone.getTimeZone(tz));
+ return fmt.format(new Date()) + " " + tz;
+ }
+
+ private static MessageDigest newMD5() {
+ try {
+ return MessageDigest.getInstance("MD5");
+ } catch (NoSuchAlgorithmException e) {
+ throw new RuntimeException("JRE lacks MD5 implementation", e);
+ }
+ }
+
+ /** AWSAccessKeyId, public string that identifies the user's account. */
+ private final String publicKey;
+
+ /** Decoded form of the private AWSSecretAccessKey, to sign requests. */
+ private final SecretKeySpec privateKey;
+
+ /** Our HTTP proxy support, in case we are behind a firewall. */
+ private final ProxySelector proxySelector;
+
+ /** ACL to apply to created objects. */
+ private final String acl;
+
+ /** Maximum number of times to try an operation. */
+ private final int maxAttempts;
+
+ /**
+ * Create a new S3 client for the supplied user information.
+ * <p>
+ * The connection properties are a subset of those supported by the popular
+ * <a href="http://jets3t.s3.amazonaws.com/index.html">jets3t</a> library.
+ * For example:
+ *
+ * <pre>
+ * # AWS Access and Secret Keys (required)
+ * accesskey: <YourAWSAccessKey>
+ * secretkey: <YourAWSSecretKey>
+ *
+ * # Access Control List setting to apply to uploads, must be one of:
+ * # PRIVATE, PUBLIC_READ (defaults to PRIVATE).
+ * acl: PRIVATE
+ *
+ * # Number of times to retry after internal error from S3.
+ * httpclient.retry-max: 3
+ * </pre>
+ *
+ * @param props
+ * connection properties.
+ *
+ */
+ public AmazonS3(final Properties props) {
+ publicKey = props.getProperty("accesskey");
+ if (publicKey == null)
+ throw new IllegalArgumentException("Missing accesskey.");
+
+ final String secret = props.getProperty("secretkey");
+ if (secret == null)
+ throw new IllegalArgumentException("Missing secretkey.");
+ privateKey = new SecretKeySpec(Constants.encodeASCII(secret), HMAC);
+
+ final String pacl = props.getProperty("acl", "PRIVATE");
+ if ("PRIVATE".equalsIgnoreCase(pacl))
+ acl = "private";
+ else if ("PUBLIC".equalsIgnoreCase(pacl))
+ acl = "public-read";
+ else if ("PUBLIC-READ".equalsIgnoreCase(pacl))
+ acl = "public-read";
+ else if ("PUBLIC_READ".equalsIgnoreCase(pacl))
+ acl = "public-read";
+ else
+ throw new IllegalArgumentException("Invalid acl: " + pacl);
+
+ maxAttempts = Integer.parseInt(props.getProperty(
+ "httpclient.retry-max", "3"));
+ proxySelector = ProxySelector.getDefault();
+ }
+
+ /**
+ * Get the content of a bucket object.
+ *
+ * @param bucket
+ * name of the bucket storing the object.
+ * @param key
+ * key of the object within its bucket.
+ * @return connection to stream the content of the object. The request
+ * properties of the connection may not be modified by the caller as
+ * the request parameters have already been signed.
+ * @throws IOException
+ * sending the request was not possible.
+ */
+ public URLConnection get(final String bucket, final String key)
+ throws IOException {
+ for (int curAttempt = 0; curAttempt < maxAttempts; curAttempt++) {
+ final HttpURLConnection c = open("GET", bucket, key);
+ authorize(c);
+ switch (HttpSupport.response(c)) {
+ case HttpURLConnection.HTTP_OK:
+ return c;
+ case HttpURLConnection.HTTP_NOT_FOUND:
+ throw new FileNotFoundException(key);
+ case HttpURLConnection.HTTP_INTERNAL_ERROR:
+ continue;
+ default:
+ throw error("Reading", key, c);
+ }
+ }
+ throw maxAttempts("Reading", key);
+ }
+
+ /**
+ * List the names of keys available within a bucket.
+ * <p>
+ * This method is primarily meant for obtaining a "recursive directory
+ * listing" rooted under the specified bucket and prefix location.
+ *
+ * @param bucket
+ * name of the bucket whose objects should be listed.
+ * @param prefix
+ * common prefix to filter the results by. Must not be null.
+ * Supplying the empty string will list all keys in the bucket.
+ * Supplying a non-empty string will act as though a trailing '/'
+ * appears in prefix, even if it does not.
+ * @return list of keys starting with <code>prefix</code>, after removing
+ * <code>prefix</code> (or <code>prefix + "/"</code>) from all
+ * of them.
+ * @throws IOException
+ * sending the request was not possible, or the response XML
+ * document could not be parsed properly.
+ */
+ public List<String> list(final String bucket, String prefix)
+ throws IOException {
+ if (prefix.length() > 0 && !prefix.endsWith("/"))
+ prefix += "/";
+ final ListParser lp = new ListParser(bucket, prefix);
+ do {
+ lp.list();
+ } while (lp.truncated);
+ return lp.entries;
+ }
+
+ /**
+ * Delete a single object.
+ * <p>
+ * Deletion always succeeds, even if the object does not exist.
+ *
+ * @param bucket
+ * name of the bucket storing the object.
+ * @param key
+ * key of the object within its bucket.
+ * @throws IOException
+ * deletion failed due to communications error.
+ */
+ public void delete(final String bucket, final String key)
+ throws IOException {
+ for (int curAttempt = 0; curAttempt < maxAttempts; curAttempt++) {
+ final HttpURLConnection c = open("DELETE", bucket, key);
+ authorize(c);
+ switch (HttpSupport.response(c)) {
+ case HttpURLConnection.HTTP_NO_CONTENT:
+ return;
+ case HttpURLConnection.HTTP_INTERNAL_ERROR:
+ continue;
+ default:
+ throw error("Deletion", key, c);
+ }
+ }
+ throw maxAttempts("Deletion", key);
+ }
+
+ /**
+ * Atomically create or replace a single small object.
+ * <p>
+ * This form is only suitable for smaller contents, where the caller can
+ * reasonably fit the entire thing into memory.
+ * <p>
+ * End-to-end data integrity is assured by internally computing the MD5
+ * checksum of the supplied data and transmitting the checksum along with
+ * the data itself.
+ *
+ * @param bucket
+ * name of the bucket storing the object.
+ * @param key
+ * key of the object within its bucket.
+ * @param data
+ * new data content for the object. Must not be null. Zero length
+ * array will create a zero length object.
+ * @throws IOException
+ * creation/updating failed due to communications error.
+ */
+ public void put(final String bucket, final String key, final byte[] data)
+ throws IOException {
+ final String md5str = Base64.encodeBytes(newMD5().digest(data));
+ final String lenstr = String.valueOf(data.length);
+ for (int curAttempt = 0; curAttempt < maxAttempts; curAttempt++) {
+ final HttpURLConnection c = open("PUT", bucket, key);
+ c.setRequestProperty("Content-Length", lenstr);
+ c.setRequestProperty("Content-MD5", md5str);
+ c.setRequestProperty(X_AMZ_ACL, acl);
+ authorize(c);
+ c.setDoOutput(true);
+ c.setFixedLengthStreamingMode(data.length);
+ final OutputStream os = c.getOutputStream();
+ try {
+ os.write(data);
+ } finally {
+ os.close();
+ }
+
+ switch (HttpSupport.response(c)) {
+ case HttpURLConnection.HTTP_OK:
+ return;
+ case HttpURLConnection.HTTP_INTERNAL_ERROR:
+ continue;
+ default:
+ throw error("Writing", key, c);
+ }
+ }
+ throw maxAttempts("Writing", key);
+ }
+
+ /**
+ * Atomically create or replace a single large object.
+ * <p>
+ * Initially the returned output stream buffers data into memory, but if the
+ * total number of written bytes starts to exceed an internal limit the data
+ * is spooled to a temporary file on the local drive.
+ * <p>
+ * Network transmission is attempted only when <code>close()</code> gets
+ * called at the end of output. Closing the returned stream can therefore
+ * take significant time, especially if the written content is very large.
+ * <p>
+ * End-to-end data integrity is assured by internally computing the MD5
+ * checksum of the supplied data and transmitting the checksum along with
+ * the data itself.
+ *
+ * @param bucket
+ * name of the bucket storing the object.
+ * @param key
+ * key of the object within its bucket.
+ * @return a stream which accepts the new data, and transmits once closed.
+ */
+ public OutputStream beginPut(final String bucket, final String key) {
+ final MessageDigest md5 = newMD5();
+ final TemporaryBuffer buffer = new TemporaryBuffer() {
+ @Override
+ public void close() throws IOException {
+ super.close();
+ try {
+ putImpl(bucket, key, md5.digest(), this);
+ } finally {
+ destroy();
+ }
+ }
+ };
+ return new DigestOutputStream(buffer, md5);
+ }
+
+ private void putImpl(final String bucket, final String key,
+ final byte[] csum, final TemporaryBuffer buf) throws IOException {
+ final String md5str = Base64.encodeBytes(csum);
+ final long len = buf.length();
+ final String lenstr = String.valueOf(len);
+ for (int curAttempt = 0; curAttempt < maxAttempts; curAttempt++) {
+ final HttpURLConnection c = open("PUT", bucket, key);
+ c.setRequestProperty("Content-Length", lenstr);
+ c.setRequestProperty("Content-MD5", md5str);
+ c.setRequestProperty(X_AMZ_ACL, acl);
+ authorize(c);
+ c.setDoOutput(true);
+ c.setFixedLengthStreamingMode((int) len);
+ final OutputStream os = c.getOutputStream();
+ try {
+ buf.writeTo(os, null);
+ } finally {
+ os.close();
+ }
+
+ switch (HttpSupport.response(c)) {
+ case HttpURLConnection.HTTP_OK:
+ return;
+ case HttpURLConnection.HTTP_INTERNAL_ERROR:
+ continue;
+ default:
+ throw error("Writing", key, c);
+ }
+ }
+ throw maxAttempts("Writing", key);
+ }
+
+ private IOException error(final String action, final String key,
+ final HttpURLConnection c) throws IOException {
+ return new IOException(action + " of '" + key + "' failed: "
+ + HttpSupport.response(c) + " " + c.getResponseMessage());
+ }
+
+ private IOException maxAttempts(final String action, final String key) {
+ return new IOException(action + " of '" + key + "' failed:"
+ + " Giving up after " + maxAttempts + " attempts.");
+ }
+
+ private HttpURLConnection open(final String method, final String bucket,
+ final String key) throws IOException {
+ final Map<String, String> noArgs = Collections.emptyMap();
+ return open(method, bucket, key, noArgs);
+ }
+
+ private HttpURLConnection open(final String method, final String bucket,
+ final String key, final Map<String, String> args)
+ throws IOException {
+ final StringBuilder urlstr = new StringBuilder();
+ urlstr.append("http://");
+ urlstr.append(bucket);
+ urlstr.append('.');
+ urlstr.append(DOMAIN);
+ urlstr.append('/');
+ if (key.length() > 0)
+ HttpSupport.encode(urlstr, key);
+ if (!args.isEmpty()) {
+ final Iterator<Map.Entry<String, String>> i;
+
+ urlstr.append('?');
+ i = args.entrySet().iterator();
+ while (i.hasNext()) {
+ final Map.Entry<String, String> e = i.next();
+ urlstr.append(e.getKey());
+ urlstr.append('=');
+ HttpSupport.encode(urlstr, e.getValue());
+ if (i.hasNext())
+ urlstr.append('&');
+ }
+ }
+
+ final URL url = new URL(urlstr.toString());
+ final Proxy proxy = HttpSupport.proxyFor(proxySelector, url);
+ final HttpURLConnection c;
+
+ c = (HttpURLConnection) url.openConnection(proxy);
+ c.setRequestMethod(method);
+ c.setRequestProperty("User-Agent", "jgit/1.0");
+ c.setRequestProperty("Date", httpNow());
+ return c;
+ }
+
+ private void authorize(final HttpURLConnection c) throws IOException {
+ final Map<String, List<String>> reqHdr = c.getRequestProperties();
+ final SortedMap<String, String> sigHdr = new TreeMap<String, String>();
+ for (final String hdr : reqHdr.keySet()) {
+ if (isSignedHeader(hdr))
+ sigHdr.put(hdr.toLowerCase(), toCleanString(reqHdr.get(hdr)));
+ }
+
+ final StringBuilder s = new StringBuilder();
+ s.append(c.getRequestMethod());
+ s.append('\n');
+
+ s.append(remove(sigHdr, "content-md5"));
+ s.append('\n');
+
+ s.append(remove(sigHdr, "content-type"));
+ s.append('\n');
+
+ s.append(remove(sigHdr, "date"));
+ s.append('\n');
+
+ for (final Map.Entry<String, String> e : sigHdr.entrySet()) {
+ s.append(e.getKey());
+ s.append(':');
+ s.append(e.getValue());
+ s.append('\n');
+ }
+
+ final String host = c.getURL().getHost();
+ s.append('/');
+ s.append(host.substring(0, host.length() - DOMAIN.length() - 1));
+ s.append(c.getURL().getPath());
+
+ final String sec;
+ try {
+ final Mac m = Mac.getInstance(HMAC);
+ m.init(privateKey);
+ sec = Base64.encodeBytes(m.doFinal(s.toString().getBytes("UTF-8")));
+ } catch (NoSuchAlgorithmException e) {
+ throw new IOException("No " + HMAC + " support:" + e.getMessage());
+ } catch (InvalidKeyException e) {
+ throw new IOException("Invalid key: " + e.getMessage());
+ }
+ c.setRequestProperty("Authorization", "AWS " + publicKey + ":" + sec);
+ }
+
+ /**
+ * Simple command line interface to {@link AmazonS3}.
+ *
+ * @param argv
+ * command line arguments. See usage for details.
+ * @throws IOException
+ * an error occurred.
+ */
+ public static void main(final String[] argv) throws IOException {
+ if (argv.length != 4) {
+ commandLineUsage();
+ return;
+ }
+
+ AwtAuthenticator.install();
+ HttpSupport.configureHttpProxy();
+
+ final AmazonS3 s3 = new AmazonS3(properties(new File(argv[0])));
+ final String op = argv[1];
+ final String bucket = argv[2];
+ final String key = argv[3];
+ if ("get".equals(op)) {
+ final URLConnection c = s3.get(bucket, key);
+ int len = c.getContentLength();
+ final InputStream in = c.getInputStream();
+ try {
+ final byte[] tmp = new byte[2048];
+ while (len > 0) {
+ final int n = in.read(tmp);
+ if (n < 0)
+ throw new EOFException("Expected " + len + " bytes.");
+ System.out.write(tmp, 0, n);
+ len -= n;
+ }
+ } finally {
+ in.close();
+ }
+ } else if ("ls".equals(op) || "list".equals(op)) {
+ for (final String k : s3.list(bucket, key))
+ System.out.println(k);
+ } else if ("rm".equals(op) || "delete".equals(op)) {
+ s3.delete(bucket, key);
+ } else if ("put".equals(op)) {
+ final OutputStream os = s3.beginPut(bucket, key);
+ final byte[] tmp = new byte[2048];
+ int n;
+ while ((n = System.in.read(tmp)) > 0)
+ os.write(tmp, 0, n);
+ os.close();
+ } else {
+ commandLineUsage();
+ }
+ }
+
+ private static void commandLineUsage() {
+ System.err.println("usage: conn.prop op bucket key");
+ System.err.println();
+ System.err.println(" where conn.prop is a jets3t properties file.");
+ System.err.println(" op is one of: get ls rm put");
+ System.err.println(" bucket is the name of the S3 bucket");
+ System.err.println(" key is the name of the object.");
+ System.exit(1);
+ }
+
+ static Properties properties(final File authFile)
+ throws FileNotFoundException, IOException {
+ final Properties p = new Properties();
+ final FileInputStream in = new FileInputStream(authFile);
+ try {
+ p.load(in);
+ } finally {
+ in.close();
+ }
+ return p;
+ }
+
+ private final class ListParser extends DefaultHandler {
+ final List<String> entries = new ArrayList<String>();
+
+ private final String bucket;
+
+ private final String prefix;
+
+ boolean truncated;
+
+ private StringBuilder data;
+
+ ListParser(final String bn, final String p) {
+ bucket = bn;
+ prefix = p;
+ }
+
+ void list() throws IOException {
+ final Map<String, String> args = new TreeMap<String, String>();
+ if (prefix.length() > 0)
+ args.put("prefix", prefix);
+ if (!entries.isEmpty())
+ args.put("marker", prefix + entries.get(entries.size() - 1));
+
+ for (int curAttempt = 0; curAttempt < maxAttempts; curAttempt++) {
+ final HttpURLConnection c = open("GET", bucket, "", args);
+ authorize(c);
+ switch (HttpSupport.response(c)) {
+ case HttpURLConnection.HTTP_OK:
+ truncated = false;
+ data = null;
+
+ final XMLReader xr;
+ try {
+ xr = XMLReaderFactory.createXMLReader();
+ } catch (SAXException e) {
+ throw new IOException("No XML parser available.");
+ }
+ xr.setContentHandler(this);
+ final InputStream in = c.getInputStream();
+ try {
+ xr.parse(new InputSource(in));
+ } catch (SAXException parsingError) {
+ final IOException p;
+ p = new IOException("Error listing " + prefix);
+ p.initCause(parsingError);
+ throw p;
+ } finally {
+ in.close();
+ }
+ return;
+
+ case HttpURLConnection.HTTP_INTERNAL_ERROR:
+ continue;
+
+ default:
+ throw AmazonS3.this.error("Listing", prefix, c);
+ }
+ }
+ throw maxAttempts("Listing", prefix);
+ }
+
+ @Override
+ public void startElement(final String uri, final String name,
+ final String qName, final Attributes attributes)
+ throws SAXException {
+ if ("Key".equals(name) || "IsTruncated".equals(name))
+ data = new StringBuilder();
+ }
+
+ @Override
+ public void ignorableWhitespace(final char[] ch, final int s,
+ final int n) throws SAXException {
+ if (data != null)
+ data.append(ch, s, n);
+ }
+
+ @Override
+ public void characters(final char[] ch, final int s, final int n)
+ throws SAXException {
+ if (data != null)
+ data.append(ch, s, n);
+ }
+
+ @Override
+ public void endElement(final String uri, final String name,
+ final String qName) throws SAXException {
+ if ("Key".equals(name))
+ entries.add(data.toString().substring(prefix.length()));
+ else if ("IsTruncated".equals(name))
+ truncated = "true".equalsIgnoreCase(data.toString());
+ data = null;
+ }
+ }
+}
--
1.5.6.74.g8a5e
* [JGIT PATCH 20/21] Add client side encryption to Amazon S3 client library
2008-06-29 7:59 ` [JGIT PATCH 19/21] Create a really simple Amazon S3 REST client Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 21/21] Bidirectional protocol support for Amazon S3 Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
By encrypting (and decrypting) all data on the client side we
are able to safely hide the content of our repository from the
owners/operators of the Amazon S3 service, making it a secure backup
solution for Git repositories.
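For illustration, a minimal sketch of reading an encrypted object back
through the client (bucket, key and file names are examples only; the
"password" entry in the connection properties is what switches the
client-side encryption on):

    final Properties props = new Properties();
    props.load(new FileInputStream("/home/me/.s3_backup")); // includes password: <passphrase>
    final AmazonS3 s3 = new AmazonS3(props);
    final URLConnection c = s3.get("mybucket", "repo.git/info/refs");
    final InputStream plain = s3.decrypt(c); // decryption happens entirely on the client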
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/transport/AmazonS3.java | 82 ++++++++-
.../org/spearce/jgit/transport/WalkEncryption.java | 188 ++++++++++++++++++++
2 files changed, 266 insertions(+), 4 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/WalkEncryption.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java b/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
index 466d9e9..4c82967 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/AmazonS3.java
@@ -37,6 +37,7 @@
package org.spearce.jgit.transport;
+import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
@@ -53,6 +54,7 @@ import java.security.DigestOutputStream;
import java.security.InvalidKeyException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
+import java.security.spec.InvalidKeySpecException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Collections;
@@ -93,6 +95,11 @@ import org.xml.sax.helpers.XMLReaderFactory;
* <p>
* Authentication is always performed using the user's AWSAccessKeyId and their
* private AWSSecretAccessKey.
+ * <p>
+ * Optional client-side encryption may be enabled if requested. The format is
+ * compatible with <a href="http://jets3t.s3.amazonaws.com/index.html">jets3t</a>,
+ * a popular Java based Amazon S3 client library. Enabling encryption can hide
+ * sensitive data from the operators of the S3 service.
*/
public class AmazonS3 {
private static final Set<String> SIGNED_HEADERS;
@@ -103,6 +110,8 @@ public class AmazonS3 {
private static final String X_AMZ_ACL = "x-amz-acl";
+ private static final String X_AMZ_META = "x-amz-meta-";
+
static {
SIGNED_HEADERS = new HashSet<String>();
SIGNED_HEADERS.add("content-type");
@@ -161,6 +170,9 @@ public class AmazonS3 {
/** Maximum number of times to try an operation. */
private final int maxAttempts;
+ /** Encryption algorithm, may be a null instance that provides pass-through. */
+ private final WalkEncryption encryption;
+
/**
* Create a new S3 client for the supplied user information.
* <p>
@@ -179,6 +191,10 @@ public class AmazonS3 {
*
* # Number of times to retry after internal error from S3.
* httpclient.retry-max: 3
+ *
+ * # End-to-end encryption (hides content from S3 owners)
+ * password: <encryption pass-phrase>
+ * crypto.algorithm: PBEWithMD5AndDES
* </pre>
*
* @param props
@@ -207,6 +223,22 @@ public class AmazonS3 {
else
throw new IllegalArgumentException("Invalid acl: " + pacl);
+ try {
+ final String cPas = props.getProperty("password");
+ if (cPas != null) {
+ String cAlg = props.getProperty("crypto.algorithm");
+ if (cAlg == null)
+ cAlg = "PBEWithMD5AndDES";
+ encryption = new WalkEncryption.ObjectEncryptionV2(cAlg, cPas);
+ } else {
+ encryption = WalkEncryption.NONE;
+ }
+ } catch (InvalidKeySpecException e) {
+ throw new IllegalArgumentException("Invalid encryption", e);
+ } catch (NoSuchAlgorithmException e) {
+ throw new IllegalArgumentException("Invalid encryption", e);
+ }
+
maxAttempts = Integer.parseInt(props.getProperty(
"httpclient.retry-max", "3"));
proxySelector = ProxySelector.getDefault();
@@ -232,6 +264,7 @@ public class AmazonS3 {
authorize(c);
switch (HttpSupport.response(c)) {
case HttpURLConnection.HTTP_OK:
+ encryption.validate(c, X_AMZ_META);
return c;
case HttpURLConnection.HTTP_NOT_FOUND:
throw new FileNotFoundException(key);
@@ -245,6 +278,19 @@ public class AmazonS3 {
}
/**
+ * Decrypt an input stream from {@link #get(String, String)}.
+ *
+ * @param u
+ * connection previously created by {@link #get(String, String)}}.
+ * @return stream to read plain text from.
+ * @throws IOException
+ * decryption could not be configured.
+ */
+ public InputStream decrypt(final URLConnection u) throws IOException {
+ return encryption.decrypt(u.getInputStream());
+ }
+
+ /**
* List the names of keys available within a bucket.
* <p>
* This method is primarily meant for obtaining a "recursive directory
@@ -326,6 +372,16 @@ public class AmazonS3 {
*/
public void put(final String bucket, final String key, final byte[] data)
throws IOException {
+ if (encryption != WalkEncryption.NONE) {
+ // We have to copy to produce the cipher text anyway so use
+ // the large object code path as it supports that behavior.
+ //
+ final OutputStream os = beginPut(bucket, key);
+ os.write(data);
+ os.close();
+ return;
+ }
+
final String md5str = Base64.encodeBytes(newMD5().digest(data));
final String lenstr = String.valueOf(data.length);
for (int curAttempt = 0; curAttempt < maxAttempts; curAttempt++) {
@@ -375,8 +431,11 @@ public class AmazonS3 {
* @param key
* key of the object within its bucket.
* @return a stream which accepts the new data, and transmits once closed.
+ * @throws IOException
+ * if encryption was enabled it could not be configured.
*/
- public OutputStream beginPut(final String bucket, final String key) {
+ public OutputStream beginPut(final String bucket, final String key)
+ throws IOException {
final MessageDigest md5 = newMD5();
final TemporaryBuffer buffer = new TemporaryBuffer() {
@Override
@@ -389,7 +448,7 @@ public class AmazonS3 {
}
}
};
- return new DigestOutputStream(buffer, md5);
+ return encryption.encrypt(new DigestOutputStream(buffer, md5));
}
private void putImpl(final String bucket, final String key,
@@ -402,6 +461,7 @@ public class AmazonS3 {
c.setRequestProperty("Content-Length", lenstr);
c.setRequestProperty("Content-MD5", md5str);
c.setRequestProperty(X_AMZ_ACL, acl);
+ encryption.request(c, X_AMZ_META);
authorize(c);
c.setDoOutput(true);
c.setFixedLengthStreamingMode((int) len);
@@ -426,8 +486,22 @@ public class AmazonS3 {
private IOException error(final String action, final String key,
final HttpURLConnection c) throws IOException {
- return new IOException(action + " of '" + key + "' failed: "
- + HttpSupport.response(c) + " " + c.getResponseMessage());
+ final IOException err = new IOException(action + " of '" + key
+ + "' failed: " + HttpSupport.response(c) + " "
+ + c.getResponseMessage());
+ final ByteArrayOutputStream b = new ByteArrayOutputStream();
+ byte[] buf = new byte[2048];
+ for (;;) {
+ final int n = c.getErrorStream().read(buf);
+ if (n < 0)
+ break;
+ if (n > 0)
+ b.write(buf, 0, n);
+ }
+ buf = b.toByteArray();
+ if (buf.length > 0)
+ err.initCause(new IOException("\n" + new String(buf)));
+ return err;
}
private IOException maxAttempts(final String action, final String key) {
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/WalkEncryption.java b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkEncryption.java
new file mode 100644
index 0000000..cec6d75
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/WalkEncryption.java
@@ -0,0 +1,188 @@
+/*
+ * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ *
+ * - Neither the name of the Git Development Community nor the
+ * names of its contributors may be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+ * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.spearce.jgit.transport;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.security.InvalidAlgorithmParameterException;
+import java.security.InvalidKeyException;
+import java.security.NoSuchAlgorithmException;
+import java.security.spec.InvalidKeySpecException;
+
+import javax.crypto.Cipher;
+import javax.crypto.CipherInputStream;
+import javax.crypto.CipherOutputStream;
+import javax.crypto.NoSuchPaddingException;
+import javax.crypto.SecretKey;
+import javax.crypto.SecretKeyFactory;
+import javax.crypto.spec.PBEKeySpec;
+import javax.crypto.spec.PBEParameterSpec;
+
+abstract class WalkEncryption {
+ static final WalkEncryption NONE = new NoEncryption();
+
+ static final String JETS3T_CRYPTO_VER = "jets3t-crypto-ver";
+
+ static final String JETS3T_CRYPTO_ALG = "jets3t-crypto-alg";
+
+ abstract OutputStream encrypt(OutputStream os) throws IOException;
+
+ abstract InputStream decrypt(InputStream in) throws IOException;
+
+ abstract void request(HttpURLConnection u, String prefix);
+
+ abstract void validate(HttpURLConnection u, String p) throws IOException;
+
+ protected void validateImpl(final HttpURLConnection u, final String p,
+ final String version, final String name) throws IOException {
+ String v;
+
+ v = u.getHeaderField(p + JETS3T_CRYPTO_VER);
+ if (v == null)
+ v = "";
+ if (!version.equals(v))
+ throw new IOException("Unsupported encryption version: " + v);
+
+ v = u.getHeaderField(p + JETS3T_CRYPTO_ALG);
+ if (v == null)
+ v = "";
+ if (!name.equals(v))
+ throw new IOException("Unsupported encryption algorithm: " + v);
+ }
+
+ IOException error(final Throwable why) {
+ final IOException e;
+ e = new IOException("Encryption error: " + why.getMessage());
+ e.initCause(why);
+ return e;
+ }
+
+ private static class NoEncryption extends WalkEncryption {
+ @Override
+ void request(HttpURLConnection u, String prefix) {
+ // Don't store any request properties.
+ }
+
+ @Override
+ void validate(final HttpURLConnection u, final String p)
+ throws IOException {
+ validateImpl(u, p, "", "");
+ }
+
+ @Override
+ InputStream decrypt(InputStream in) {
+ return in;
+ }
+
+ @Override
+ OutputStream encrypt(OutputStream os) {
+ return os;
+ }
+ }
+
+ static class ObjectEncryptionV2 extends WalkEncryption {
+ private static int ITERATION_COUNT = 5000;
+
+ private static byte[] salt = { (byte) 0xA4, (byte) 0x0B, (byte) 0xC8,
+ (byte) 0x34, (byte) 0xD6, (byte) 0x95, (byte) 0xF3, (byte) 0x13 };
+
+ private final String algorithmName;
+
+ private final SecretKey skey;
+
+ private final PBEParameterSpec aspec;
+
+ ObjectEncryptionV2(final String algo, final String key)
+ throws InvalidKeySpecException, NoSuchAlgorithmException {
+ algorithmName = algo;
+
+ final PBEKeySpec s;
+ s = new PBEKeySpec(key.toCharArray(), salt, ITERATION_COUNT, 32);
+ skey = SecretKeyFactory.getInstance(algo).generateSecret(s);
+ aspec = new PBEParameterSpec(salt, ITERATION_COUNT);
+ }
+
+ @Override
+ void request(final HttpURLConnection u, final String prefix) {
+ u.setRequestProperty(prefix + JETS3T_CRYPTO_VER, "2");
+ u.setRequestProperty(prefix + JETS3T_CRYPTO_ALG, algorithmName);
+ }
+
+ @Override
+ void validate(final HttpURLConnection u, final String p)
+ throws IOException {
+ validateImpl(u, p, "2", algorithmName);
+ }
+
+ @Override
+ OutputStream encrypt(final OutputStream os) throws IOException {
+ try {
+ final Cipher c = Cipher.getInstance(algorithmName);
+ c.init(Cipher.ENCRYPT_MODE, skey, aspec);
+ return new CipherOutputStream(os, c);
+ } catch (NoSuchAlgorithmException e) {
+ throw error(e);
+ } catch (NoSuchPaddingException e) {
+ throw error(e);
+ } catch (InvalidKeyException e) {
+ throw error(e);
+ } catch (InvalidAlgorithmParameterException e) {
+ throw error(e);
+ }
+ }
+
+ @Override
+ InputStream decrypt(final InputStream in) throws IOException {
+ try {
+ final Cipher c = Cipher.getInstance(algorithmName);
+ c.init(Cipher.DECRYPT_MODE, skey, aspec);
+ return new CipherInputStream(in, c);
+ } catch (NoSuchAlgorithmException e) {
+ throw error(e);
+ } catch (NoSuchPaddingException e) {
+ throw error(e);
+ } catch (InvalidKeyException e) {
+ throw error(e);
+ } catch (InvalidAlgorithmParameterException e) {
+ throw error(e);
+ }
+ }
+ }
+}
--
1.5.6.74.g8a5e
* [JGIT PATCH 21/21] Bidirectional protocol support for Amazon S3
2008-06-29 7:59 ` [JGIT PATCH 20/21] Add client side encryption to Amazon S3 client library Shawn O. Pearce
@ 2008-06-29 7:59 ` Shawn O. Pearce
0 siblings, 0 replies; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 7:59 UTC (permalink / raw)
To: Robin Rosenberg, Marek Zawirski; +Cc: git
The new "amazon-s3://" transport provides bi-directional communication
for Git repositories to the S3 service. This may be useful for backup
of private data which users do not want published to the world.
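To sketch how such a URI breaks down (bucket and file names are examples
only; the URIish pattern change below is what allows a scheme containing
a digit and '-' to parse, and the constructor may throw
URISyntaxException):

    final URIish u = new URIish("amazon-s3://.s3_backup@mybucket/backups/repo.git");
    // u.getScheme() -> "amazon-s3"
    // u.getUser()   -> ".s3_backup"        (connection properties file name)
    // u.getHost()   -> "mybucket"          (the S3 bucket)
    // u.getPath()   -> "/backups/repo.git" (key prefix within the bucket)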
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
---
.../src/org/spearce/jgit/transport/Transport.java | 3 +
.../spearce/jgit/transport/TransportAmazonS3.java | 319 ++++++++++++++++++++
.../src/org/spearce/jgit/transport/URIish.java | 2 +-
3 files changed, 323 insertions(+), 1 deletions(-)
create mode 100644 org.spearce.jgit/src/org/spearce/jgit/transport/TransportAmazonS3.java
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/Transport.java b/org.spearce.jgit/src/org/spearce/jgit/transport/Transport.java
index 5376a9e..b962162 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/Transport.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/Transport.java
@@ -140,6 +140,9 @@ public abstract class Transport {
else if (TransportGitAnon.canHandle(remote))
return new TransportGitAnon(local, remote);
+ else if (TransportAmazonS3.canHandle(remote))
+ return new TransportAmazonS3(local, remote);
+
else if (TransportBundle.canHandle(remote))
return new TransportBundle(local, remote);
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/TransportAmazonS3.java b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportAmazonS3.java
new file mode 100644
index 0000000..ceb6848
--- /dev/null
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/TransportAmazonS3.java
@@ -0,0 +1,319 @@
+/*
+ * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials provided
+ * with the distribution.
+ *
+ * - Neither the name of the Git Development Community nor the
+ * names of its contributors may be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+ * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.spearce.jgit.transport;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URLConnection;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Properties;
+import java.util.TreeMap;
+
+import org.spearce.jgit.errors.NotSupportedException;
+import org.spearce.jgit.errors.TransportException;
+import org.spearce.jgit.lib.ObjectId;
+import org.spearce.jgit.lib.Ref;
+import org.spearce.jgit.lib.Repository;
+import org.spearce.jgit.lib.Ref.Storage;
+import org.spearce.jgit.util.FS;
+
+/**
+ * Transport over the non-Git aware Amazon S3 protocol.
+ * <p>
+ * This transport communicates with the Amazon S3 servers (a non-free commercial
+ * hosting service that users must subscribe to). Some users may find transport
+ * to and from S3 to be a useful backup service.
+ * <p>
+ * The transport does not require any specialized Git support on the remote
+ * (server side) repository, as Amazon does not provide any such support.
+ * Repository files are retrieved directly through the S3 API, which uses
+ * extended HTTP/1.1 semantics. This makes it possible to read or write Git data
+ * from a remote repository that is stored on S3.
+ * <p>
+ * Unlike the HTTP variant (see {@link TransportHttp}) we rely upon being able
+ * to list objects in a bucket, as the S3 API supports this function. By listing
+ * the bucket contents we can avoid relying on <code>objects/info/packs</code>
+ * or <code>info/refs</code> in the remote repository.
+ * <p>
+ * Concurrent pushing over this transport is not supported. Multiple concurrent
+ * push operations may cause confusion in the repository state.
+ *
+ * @see WalkFetchConnection
+ * @see WalkPushConnection
+ */
+class TransportAmazonS3 extends WalkTransport {
+ static final String S3_SCHEME = "amazon-s3";
+
+ static boolean canHandle(final URIish uri) {
+ if (!uri.isRemote())
+ return false;
+ return S3_SCHEME.equals(uri.getScheme());
+ }
+
+ /** User information necessary to connect to S3. */
+ private final AmazonS3 s3;
+
+ /** Bucket the remote repository is stored in. */
+ private final String bucket;
+
+ /**
+ * Key prefix which all objects related to the repository start with.
+ * <p>
+ * The prefix does not start with "/".
+ * <p>
+ * The prefix does not end with "/". The trailing slash is stripped during
+ * the constructor if a trailing slash was supplied in the URIish.
+ * <p>
+ * All files within the remote repository start with
+ * <code>keyPrefix + "/"</code>.
+ */
+ private final String keyPrefix;
+
+ TransportAmazonS3(final Repository local, final URIish uri)
+ throws NotSupportedException {
+ super(local, uri);
+
+ Properties props = null;
+ File propsFile = new File(local.getDirectory(), uri.getUser());
+ if (!propsFile.isFile())
+ propsFile = new File(FS.userHome(), uri.getUser());
+ if (propsFile.isFile()) {
+ try {
+ props = AmazonS3.properties(propsFile);
+ } catch (IOException e) {
+ throw new NotSupportedException("cannot read " + propsFile, e);
+ }
+ } else {
+ props = new Properties();
+ props.setProperty("accesskey", uri.getUser());
+ props.setProperty("secretkey", uri.getPass());
+ }
+
+ s3 = new AmazonS3(props);
+ bucket = uri.getHost();
+
+ String p = uri.getPath();
+ if (p.startsWith("/"))
+ p = p.substring(1);
+ if (p.endsWith("/"))
+ p = p.substring(0, p.length() - 1);
+ keyPrefix = p;
+ }
+
+ @Override
+ public FetchConnection openFetch() throws TransportException {
+ final DatabaseS3 c = new DatabaseS3(bucket, keyPrefix + "/objects");
+ final WalkFetchConnection r = new WalkFetchConnection(this, c);
+ r.available(c.readAdvertisedRefs());
+ return r;
+ }
+
+ @Override
+ public PushConnection openPush() throws TransportException {
+ final DatabaseS3 c = new DatabaseS3(bucket, keyPrefix + "/objects");
+ final WalkPushConnection r = new WalkPushConnection(this, c);
+ r.available(c.readAdvertisedRefs());
+ return r;
+ }
+
+ class DatabaseS3 extends WalkRemoteObjectDatabase {
+ private final String bucketName;
+
+ private final String objectsKey;
+
+ DatabaseS3(final String b, final String o) {
+ bucketName = b;
+ objectsKey = o;
+ }
+
+ private String resolveKey(String subpath) {
+ if (subpath.endsWith("/"))
+ subpath = subpath.substring(0, subpath.length() - 1);
+ String k = objectsKey;
+ while (subpath.startsWith("../")) {
+ k = k.substring(0, k.lastIndexOf('/'));
+ subpath = subpath.substring(3);
+ }
+ return k + "/" + subpath;
+ }
+
+ @Override
+ URIish getURI() {
+ URIish u = new URIish();
+ u = u.setScheme(S3_SCHEME);
+ u = u.setHost(bucketName);
+ u = u.setPath("/" + objectsKey);
+ return u;
+ }
+
+ @Override
+ Collection<WalkRemoteObjectDatabase> getAlternates() throws IOException {
+ try {
+ return readAlternates(INFO_ALTERNATES);
+ } catch (FileNotFoundException err) {
+ // Fall through.
+ }
+ return null;
+ }
+
+ @Override
+ WalkRemoteObjectDatabase openAlternate(final String location)
+ throws IOException {
+ return new DatabaseS3(bucketName, resolveKey(location));
+ }
+
+ @Override
+ Collection<String> getPackNames() throws IOException {
+ final HashSet<String> have = new HashSet<String>();
+ have.addAll(s3.list(bucket, resolveKey("pack")));
+
+ final Collection<String> packs = new ArrayList<String>();
+ for (final String n : have) {
+ if (!n.startsWith("pack-") || !n.endsWith(".pack"))
+ continue;
+
+ final String in = n.substring(0, n.length() - 5) + ".idx";
+ if (have.contains(in))
+ packs.add(n);
+ }
+ return packs;
+ }
+
+ @Override
+ FileStream open(final String path) throws IOException {
+ final URLConnection c = s3.get(bucket, resolveKey(path));
+ final InputStream raw = c.getInputStream();
+ final InputStream in = s3.decrypt(c);
+ final int len = c.getContentLength();
+ return new FileStream(in, raw == in ? len : -1);
+ }
+
+ @Override
+ void deleteFile(final String path) throws IOException {
+ s3.delete(bucket, resolveKey(path));
+ }
+
+ @Override
+ OutputStream writeFile(final String path) throws IOException {
+ return s3.beginPut(bucket, resolveKey(path));
+ }
+
+ @Override
+ void writeFile(final String path, final byte[] data) throws IOException {
+ s3.put(bucket, resolveKey(path), data);
+ }
+
+ Map<String, Ref> readAdvertisedRefs() throws TransportException {
+ final TreeMap<String, Ref> avail = new TreeMap<String, Ref>();
+ readPackedRefs(avail);
+ readLooseRefs(avail);
+ readRef(avail, "HEAD");
+ return avail;
+ }
+
+ private void readLooseRefs(final TreeMap<String, Ref> avail)
+ throws TransportException {
+ try {
+ for (final String n : s3.list(bucket, resolveKey("../refs")))
+ readRef(avail, "refs/" + n);
+ } catch (IOException e) {
+ throw new TransportException(getURI(), "cannot list refs", e);
+ }
+ }
+
+ private Ref readRef(final TreeMap<String, Ref> avail, final String rn)
+ throws TransportException {
+ final String s;
+ try {
+ final BufferedReader br = openReader("../" + rn);
+ try {
+ s = br.readLine();
+ } finally {
+ br.close();
+ }
+ } catch (FileNotFoundException noRef) {
+ return null;
+ } catch (IOException err) {
+ throw new TransportException(getURI(), "read ../" + rn, err);
+ }
+
+ if (s == null)
+ throw new TransportException(getURI(), "Empty ref: " + rn);
+
+ if (s.startsWith("ref: ")) {
+ final String target = s.substring("ref: ".length());
+ Ref r = avail.get(target);
+ if (r == null)
+ r = readRef(avail, target);
+ if (r == null)
+ return null;
+ r = new Ref(r.getStorage(), rn, r.getObjectId(), r
+ .getPeeledObjectId());
+ avail.put(r.getName(), r);
+ return r;
+ }
+
+ if (ObjectId.isId(s)) {
+ final Ref r = new Ref(loose(avail.get(rn)), rn, ObjectId
+ .fromString(s));
+ avail.put(r.getName(), r);
+ return r;
+ }
+
+ throw new TransportException(getURI(), "Bad ref: " + rn + ": " + s);
+ }
+
+ private Storage loose(final Ref r) {
+ if (r != null && r.getStorage() == Storage.PACKED)
+ return Storage.LOOSE_PACKED;
+ return Storage.LOOSE;
+ }
+
+ @Override
+ void close() {
+ // We do not maintain persistent connections.
+ }
+ }
+}
diff --git a/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java b/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java
index 9e7ca83..8aa5d35 100644
--- a/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java
+++ b/org.spearce.jgit/src/org/spearce/jgit/transport/URIish.java
@@ -51,7 +51,7 @@ import java.util.regex.Pattern;
*/
public class URIish {
private static final Pattern FULL_URI = Pattern
- .compile("^(?:([a-z+]+)://(?:([^/]+?)(?::([^/]+?))?@)?(?:([^/]+?))?(?::(\\d+))?)?((?:[A-Za-z]:)?/.+)$");
+ .compile("^(?:([a-z0-9+-]+)://(?:([^/]+?)(?::([^/]+?))?@)?(?:([^/]+?))?(?::(\\d+))?)?((?:[A-Za-z]:)?/.+)$");
private static final Pattern SCP_URI = Pattern
.compile("^(?:([^@]+?)@)?([^:]+?):(.+)$");
--
1.5.6.74.g8a5e
* Re: [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created
2008-06-29 7:59 ` [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 10/21] Simplify walker transport ref advertisement setup Shawn O. Pearce
@ 2008-06-29 13:51 ` Robin Rosenberg
2008-06-29 14:17 ` Johannes Schindelin
1 sibling, 1 reply; 27+ messages in thread
From: Robin Rosenberg @ 2008-06-29 13:51 UTC (permalink / raw)
To: Shawn O. Pearce; +Cc: Marek Zawirski, git
On Sunday 29 June 2008 09:59:19, Shawn O. Pearce wrote:
> To efficiently delete or update a ref we need to know where
> it came from when it was read into the process. If the ref
> is being updated we can usually just write the loose file,
> but if it is being deleted we may need to remove not just a
> loose file but also delete it from the packed-refs.
One could argue that we should not normally just delete a ref, but
mark it as deleted and let git gc delete it when it expires, just like
any old ref, but then we should try to get C Git to do the same. There
was a thread relating to this recently.
-- robi
* Re: [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility
2008-06-29 7:59 ` [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility Shawn O. Pearce
2008-06-29 7:59 ` [JGIT PATCH 17/21] Misc. documentation fixes to Base64 utility Shawn O. Pearce
@ 2008-06-29 13:51 ` Robin Rosenberg
2008-06-29 18:06 ` Shawn O. Pearce
1 sibling, 1 reply; 27+ messages in thread
From: Robin Rosenberg @ 2008-06-29 13:51 UTC (permalink / raw)
To: Shawn O. Pearce; +Cc: Marek Zawirski, git
Dragging in the Apache Commons libraries for just one class seems to be too much
at this point, so this choice is understandable.
Other than that, Apache Commons is almost ubiquitous these days and should be
considered whenever we need external code.
-- robin
* Re: [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created
2008-06-29 13:51 ` [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created Robin Rosenberg
@ 2008-06-29 14:17 ` Johannes Schindelin
2008-06-29 18:00 ` Shawn O. Pearce
0 siblings, 1 reply; 27+ messages in thread
From: Johannes Schindelin @ 2008-06-29 14:17 UTC (permalink / raw)
To: Robin Rosenberg; +Cc: Shawn O. Pearce, Marek Zawirski, git
Hi,
On Sun, 29 Jun 2008, Robin Rosenberg wrote:
> On Sunday 29 June 2008 09:59:19, Shawn O. Pearce wrote:
> > To efficiently delete or update a ref we need to know where it came
> > from when it was read into the process. If the ref is being updated
> > we can usually just write the loose file, but if it is being deleted
> > we may need to remove not just a loose file but also delete it from
> > the packed-refs.
>
> One could argue that we should not normally just delete a ref, but mark
> it as deleted and let git gc delete it when it expires, just like any
> old ref, but then we should try to get C Git to do the same. There was a
> thread relating to this recently.
... but it petered out, so you should consider any ideas in that thread
rejected.
Ciao,
Dscho
* Re: [JGIT PATCH 09/21] Remember how a Ref was read in from disk and created
2008-06-29 14:17 ` Johannes Schindelin
@ 2008-06-29 18:00 ` Shawn O. Pearce
0 siblings, 0 replies; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 18:00 UTC (permalink / raw)
To: Johannes Schindelin; +Cc: Robin Rosenberg, Marek Zawirski, git
Johannes Schindelin <Johannes.Schindelin@gmx.de> wrote:
> On Sun, 29 Jun 2008, Robin Rosenberg wrote:
>
> > On Sunday 29 June 2008 09:59:19, Shawn O. Pearce wrote:
> > > To efficiently delete or update a ref we need to know where it came
> > > from when it was read into the process. If the ref is being updated
> > > we can usually just write the loose file, but if it is being deleted
> > > we may need to remove not just a loose file but also delete it from
> > > the packed-refs.
> >
> > One could argue that we should not normally just delete a ref, but mark
> > it as deleted and let git gc delete it when it expires, just like any
> > old ref, but then we should try to get C Git to do the same. There was a
> > thread relating to this recently.
>
> ... but it petered out, so you should consider any ideas in that thread
> rejected.
Right. It's a nice idea, but until there is a really solid agreement in
the community about how this should be stored on disk, I don't want to
try and implement it in jgit, or in C Git for that matter. And I don't
really care enough to come up with something and reopen the thread myself.
I just realized that the dumb transport push support doesn't delete
the reflog when it deletes the ref. Whoops. That's a problem
if you later try to create a ref where a directory used to be:
git will run into errors trying to create the reflog.
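For illustration, the missing step would be something along these lines
("dest" and the exact paths here are hypothetical, reusing the
deleteFile() helper from this series):

    // when deleting refs/heads/topic over the dumb transport,
    // also drop its reflog before removing the loose ref itself
    dest.deleteFile("../logs/refs/heads/topic");
    dest.deleteFile("../refs/heads/topic");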
--
Shawn.
* Re: [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility
2008-06-29 13:51 ` [JGIT PATCH 16/21] Add Robert Harder's public domain Base64 encoding utility Robin Rosenberg
@ 2008-06-29 18:06 ` Shawn O. Pearce
0 siblings, 0 replies; 27+ messages in thread
From: Shawn O. Pearce @ 2008-06-29 18:06 UTC (permalink / raw)
To: Robin Rosenberg; +Cc: Marek Zawirski, git
Robin Rosenberg <robin.rosenberg@dewire.com> wrote:
>
> Dragging in apache commons libraries seems to be too much at this
> point, one class, so this choice is understandable at this point.
>
> Other that that Apache Commons are almost ubiquotous these days
> and so be considered at every point when we need external code.
True. But I have a nearly allergic reaction to Apache code;
for some reason it's always sort of not quite there. Which also
describes jgit, so I shouldn't say anything. ;-)
We can always rip this implementation out if we do wind up taking
in the Apache Commons libraries for other support. There is only
1 or two calls sites and it should be easy enough to change over
to the Apache Commons implementation.
--
Shawn.