Wednesday, May 13, 2026

AWS: The State of Account State

In September 2025, AWS announced that the Account information in the Organizations service would get a new State field to replace the Status field. Since that date, both fields have been available in all Organizations operations, but the Status field is slated to be removed in September 2026.

When you read such an announcement and you know your code uses the Status field, you plan to review your code and update it. We did so almost immediately, but we could not see the new State field when executing our Lambdas. So we postponed the update.

Recently, I had another look at the problem, and still could not see any State field appearing in the Lambdas. I then tested a call to DescribeAccount from CloudShell, and there the field was indeed present. So I decided to run the following Lambda:

import boto3
import botocore

def lambda_handler(event, context):
    print("boto3:", boto3.__version__)
    print("botocore:", botocore.__version__)

    org_client = boto3.client("organizations")
    response = org_client.describe_account(AccountId="123456789012")
    print(response)

I was surprised by the result.

boto3: 1.40.4
botocore: 1.40.4

Those versions were released in August 2025, before the update. CloudShell in my test uses botocore 1.42.72, which is from March this year. When I notified AWS Support about it, they just told me to use a Layer with a more recent botocore included. How long should I keep this temporary workaround?
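Until the runtime catches up, a defensive way to read the state is to prefer the new field and fall back to the old one. A minimal sketch; account_state is a hypothetical helper of ours, not an AWS API:

```python
def account_state(account: dict) -> str:
    """Return the account state, preferring the new State field and
    falling back to the legacy Status field when the local botocore
    is too old to surface State."""
    return account.get("State") or account.get("Status", "UNKNOWN")

# With a recent botocore, DescribeAccount returns both fields:
new_style = {"Id": "123456789012", "State": "ACTIVE", "Status": "ACTIVE"}
# With the Lambda runtime's botocore 1.40.4, only Status comes back:
old_style = {"Id": "123456789012", "Status": "ACTIVE"}

print(account_state(new_style))  # ACTIVE
print(account_state(old_style))  # ACTIVE
```

This way the same code keeps working before and after the September 2026 removal, whatever botocore version the runtime ships.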

Tuesday, May 12, 2026

AWS: Duplicates in Search Provisioned Products

 Using our beloved boto3 library, we are looking for the list of all our Provisioned Products in Service Catalog.

sc_client = boto3.client('servicecatalog')
result = sc_client.search_provisioned_products(
    PageSize=20
)

I won't bore you with the code that loops over the result and performs the operation again when we have more than 20 products. But the strange thing is that whenever we had more than the page size, some products were repeated on later pages. Suspecting a bug in AWS Service Catalog, we reached out to the Support Team. This is their answer:

This is a known behavior with the SearchProvisionedProducts API when using the default relevance-based sorting. Because results are sorted by relevance, the ordering can shift slightly between paginated requests, which causes duplicates (or occasionally missed items) across pages.

Never heard of relevance-based sorting. Looking at the documentation, there is no mention of it:

SortBy

The sort field. If no value is specified, the results are not sorted. The valid values are arn, id, name, and lastRecordId.

Then, the Support Team proposed a solution:

Adding SortBy='createdTime' gives the pagination a stable ordering, so the page token points to a consistent boundary between pages. No more duplicates should appear regardless of how many provisioned products you have.

It is interesting to note that 'createdTime' is not listed in the documentation either. We tried it and it works. So a hidden feature solves a known bug.

Sunday, February 15, 2026

Collectors.toMap does not like null

 This article was originally published on JRoller on November 26, 2015

Some maps accept null values, some don't. How do you know? You usually take a look at the Javadoc. But what about maps created by streams through Collectors.toMap? The Javadoc does not say, so I tried it out. I picked the following code:

 	List<String> player = Arrays.asList("Lebron", "Kobe", "Shaquille");
	List<String> team = Arrays.asList("Cleveland", "Los Angeles", null);
	
	Map<String, String> currentTeam = new HashMap<>();
	for (int i = 0; i < player.size(); i++) {
		currentTeam.put(player.get(i), team.get(i));
	}

Everything works as expected: it inserts the null value into my map. So I tried to convert it to streams (maybe not in the best way):

	Map<String, String> currentTeam = IntStream.range(0, player.size())
		.mapToObj(i -> i)
		.collect(Collectors.toMap(i -> player.get(i), i -> team.get(i)));

Here is what I get:

Exception in thread "main" java.lang.NullPointerException
	at java.util.HashMap.merge(HashMap.java:1216)
	at java.util.stream.Collectors.lambda$toMap$148(Collectors.java:1320)
	at java.util.stream.Collectors$$Lambda$6/149928006.accept(Unknown Source)
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.util.stream.IntPipeline$4$1.accept(IntPipeline.java:250)
	at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
	at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at Test.main(Test.java:23)

Is it using a kind of Map that does not accept null values? Let's check:

   public static <T, K, U>
    Collector<T, ?, Map<K,U>> toMap(Function<? super T, ? extends K> keyMapper,
                                    Function<? super T, ? extends U> valueMapper) {
        return toMap(keyMapper, valueMapper, throwingMerger(), HashMap::new);
    } 

Well, no. It uses a standard HashMap. In the stack trace, the exception is thrown from the HashMap.merge() method. So let's have a look:

   @Override
    public V merge(K key, V value,
                   BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
        if (value == null)
            throw new NullPointerException();
	...
    }

So the problem is not in the type of map but in the implementation of the merge. The javadoc for the HashMap merge() method says that the value parameter is "the non-null value to be merged with the existing value associated with the key". So yes, it says it in the Javadoc, but not where you would expect it.
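The distinction is easy to demonstrate directly on a HashMap: put() happily stores a null value, while merge() rejects it up front. A minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class MergeNull {

    // Returns true if merge() throws NullPointerException for a null value
    static boolean mergeRejectsNull() {
        Map<String, String> map = new HashMap<>();
        map.put("Shaquille", null);            // put() accepts null
        try {
            map.merge("Shaquille", null, (oldV, newV) -> newV);
            return false;
        } catch (NullPointerException expected) {
            return true;                       // merge() does not
        }
    }

    public static void main(String[] args) {
        System.out.println(mergeRejectsNull()); // true
    }
}
```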

By the way, if you really want to make it work with streams, you have to supply your own accumulation (as a side note, the same problem arises with the groupingBy collector):

	Map<String, String> currentTeam = IntStream.range(0, player.size())
		.mapToObj(i -> i)
		.collect(HashMap::new,
			(map, i) -> map.put(player.get(i), team.get(i)),
			HashMap::putAll);

Maybe I'll stick with the loop for this time.

Lambda Before Switch

 This article was originally published in JRoller on September 22, 2015.

We had this method that was doing the same thing several times in a row. Here is a simplified version:

void resolveAll() {
	a = resolveA();
	if (a == null)
		states.remove(State.A);
	else
		states.add(State.A);

	b = resolveB();
	if (b == null)
		states.remove(State.B);
	else
		states.add(State.B);

	c = resolveC();
	if (c == null)
		states.remove(State.C);
	else
		states.add(State.C);
} 

It's almost the same thing repeated three times (a bit more in the real-life code), except for this resolveX method, which is different each time and returns a different type of object. We needed to modify the code, but did not feel like repeating the change several times. So we resorted to a refactoring. But what to do with this resolveX method? Inner classes are really ugly, so we turned to a switch:

void resolveAll() {
	a = resolve(State.A);
	b = resolve(State.B);
	c = resolve(State.C);
} 
	
@SuppressWarnings("unchecked")
private <T> T resolve(State state) {
	T resolved = null;
	switch (state) {
		case A:
			resolved = (T) resolveA();
			break;
		case B:
			resolved = (T) resolveB();
			break;
		case C:
			resolved = (T) resolveC();
			break;
	} 

	if (resolved == null) {
		states.remove(state);
	}  else {
		states.add(state);
	} 
	
	return resolved;
} 

Not that much simpler than the original code. However, we are slowly moving to Java 8. With lambdas, inner classes do not look so ugly anymore:

void resolveAll() {
	a = resolve(State.A, this::resolveA);
	b = resolve(State.B, this::resolveB);
	c = resolve(State.C, this::resolveC);
} 

private <T> T resolve(State state, Supplier<T> resolver) {
	T resolved = resolver.get();
	
	if (resolved == null) {
		states.remove(state);
	}  else {
		states.add(state);
	} 
	
	return resolved;
} 
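For completeness, here is a self-contained sketch of the lambda version. The State enum, the states set, and the resolveX methods (with their return types) are made up for the demo:

```java
import java.util.EnumSet;
import java.util.Set;
import java.util.function.Supplier;

public class Resolver {
    enum State { A, B, C }

    private final Set<State> states = EnumSet.noneOf(State.class);

    // Stand-ins for the real resolveX methods; C "fails" and returns null
    private String resolveA()  { return "a"; }
    private Integer resolveB() { return 42; }
    private Object resolveC()  { return null; }

    private <T> T resolve(State state, Supplier<T> resolver) {
        T resolved = resolver.get();
        if (resolved == null) {
            states.remove(state);
        } else {
            states.add(state);
        }
        return resolved;
    }

    Set<State> resolveAll() {
        resolve(State.A, this::resolveA);
        resolve(State.B, this::resolveB);
        resolve(State.C, this::resolveC);
        return states;
    }

    public static void main(String[] args) {
        System.out.println(new Resolver().resolveAll()); // [A, B]
    }
}
```

Note how each call site keeps its own return type: the generic T is inferred from the method reference, so no unchecked casts are needed.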

Saturday, October 25, 2025

Thou Shall Close Thy Streams

 This article was originally posted on JRoller on March 12, 2015

You should always close your IO streams. Said like this, it sounds obvious. But in light of some new Java 8 features, it took me some time to get it right.

I needed to write a small method for modifying a CSV file, basically changing the last 1 of each line into a 0. Not really difficult. I would create a temporary file where I would copy the original lines with the needed modification, then overwrite the original file with the one I created. Since I could use Java 8, I thought I would use a stream and lambdas. The code looked like this:

try (PrintWriter writer = new PrintWriter(Files.newBufferedWriter(tempFile))) {
	Files.lines(toConvert)
		.map(line -> line.replace(";1", ";0"))
		.forEach(line -> writer.println(line));
} 

Files.move(tempFile, toConvert, StandardCopyOption.ATOMIC_MOVE,
	StandardCopyOption.REPLACE_EXISTING);

Looks nice, except I got this weird java.nio.file.FileSystemException with the message "The process cannot access the file because it is being used by another process." I was pretty sure that the only process using my file was my small program. The problem was that Files.lines() does not close the file. I found other references on the net to confirm my idea. And yes, I know, it is in the Javadoc, and yes, the stream is AutoCloseable. So the way to go is the following:

try (PrintWriter writer = new PrintWriter(Files.newBufferedWriter(tempFile));
	Stream<String> reader = Files.lines(toConvert)) {
	reader.map(line -> line.replace(";1", ";0"))
		.forEach(line -> writer.println(line));
}

But in my defense, I'm not the only one having problems with the Javadoc: https://bugs.openjdk.java.net/browse/JDK-8073923

Autoclose Lock

 This article was originally posted on JRoller on October 21, 2014

I was just wondering if there is a difference between the classic:

	lock.lock();
	try {
		 //I have the lock!
	}  finally {
		lock.unlock();
	} 

And the Autocloseable version:

	lock.lock();
	try (AutoCloseable auto = lock::unlock) {
		 //I have the lock!
	} 
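One difference worth noting: AutoCloseable.close() declares a checked Exception, so the method-reference version as written forces a catch or throws clause on the enclosing method. A custom interface that narrows close() avoids that. A sketch, where Releasable is our own name:

```java
import java.util.concurrent.locks.ReentrantLock;

public class AutoLock {

    // Narrows AutoCloseable.close() so it no longer declares a checked
    // Exception; a lock::unlock method reference still fits it.
    interface Releasable extends AutoCloseable {
        @Override
        void close();
    }

    static boolean demo() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        try (Releasable auto = lock::unlock) {
            if (!lock.isLocked()) {
                return false;      // still held inside the block
            }
        }
        return !lock.isLocked();   // released by close() on exit
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```

Otherwise the two versions behave the same way here: the lock is acquired before the try and released on every exit path.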

Sunday, September 21, 2025

Capital Date Mistake

 This article was originally published on JRoller on April 10, 2014

Here is a small piece of code. Can you tell what it prints?

  SimpleDateFormat sdf = new SimpleDateFormat("YYYY-MM-dd");
  Calendar cal = Calendar.getInstance();
  cal.set(Calendar.YEAR, 2014);
  cal.set(Calendar.MONTH, Calendar.DECEMBER);
  cal.set(Calendar.DAY_OF_MONTH, 31);
  Date d = cal.getTime();
  System.out.println(sdf.format(d));

If you naively answered "2014-12-31", then I would tell you just this: you are really naive.

If you run this code under Java 6, you will get back an IllegalArgumentException, with the message "Illegal pattern character 'Y'". Now it might hit you that, indeed, the character for Year in a DateFormat is the lower case 'y'.

However, this code runs under Java 7, because the capital 'Y' was added to SimpleDateFormat there. But it does not stand for Year: it stands for Week Year. If, like me, you do not know what a Week Year is, here is the Javadoc explanation:

A week year is in sync with a WEEK_OF_YEAR cycle. All weeks between the first and last weeks (inclusive) have the same week year value. Therefore, the first and last days of a week year may have different calendar year values.
For example, January 1, 1998 is a Thursday. If getFirstDayOfWeek() is MONDAY and getMinimalDaysInFirstWeek() is 4 (ISO 8601 standard compatible setting), then week 1 of 1998 starts on December 29, 1997, and ends on January 4, 1998. The week year is 1998 for the last three days of calendar year 1997. If, however, getFirstDayOfWeek() is SUNDAY, then week 1 of 1998 starts on January 4, 1998, and ends on January 10, 1998; the first three days of 1998 then are part of week 53 of 1997 and their week year is 1997.

In short, my example code will print "2015-12-31", because the last days of the year belong to a week of the following year.
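Side by side, with the locale pinned so the week rules are deterministic (week-year results depend on the locale's first-day-of-week and minimal-days settings):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;

public class WeekYear {

    // Format December 31, 2014 with the given pattern, US locale pinned
    static String format(String pattern) {
        Calendar cal = Calendar.getInstance(Locale.US);
        cal.set(Calendar.YEAR, 2014);
        cal.set(Calendar.MONTH, Calendar.DECEMBER);
        cal.set(Calendar.DAY_OF_MONTH, 31);
        Date d = cal.getTime();
        return new SimpleDateFormat(pattern, Locale.US).format(d);
    }

    public static void main(String[] args) {
        System.out.println(format("yyyy-MM-dd")); // 2014-12-31, calendar year
        System.out.println(format("YYYY-MM-dd")); // 2015-12-31, week year
    }
}
```

With the US settings, the week of December 28, 2014 to January 3, 2015 already counts as week 1 of 2015, which is why 'YYYY' jumps ahead while 'yyyy' does what you meant.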